Do a postdoc with eNanoMapper

CC-BY-SA from Zherebetskyy @ WP.
Details will still have to follow as they are being worked out, but with Cristian Munteanu having accepted an associate professorship, I need a new postdoc to fill his place, and I am reopening the position I had almost a year ago. Would you like to work in a systems biology group (BiGCaT), are you pro Open Science, do you like to work on tools for safe-by-design nanomaterials, and do you have skills in one or more of bioinformatics, chemoinformatics, statistics, coding, or ontologies? Then this position may be something for you.

The primary project for this position is eNanoMapper and you will be working within the large European NanoSafety Cluster network, though interactions are not limited to the EU.

If you are interested and cannot wait until the details of the position come out, please send me an email: first.lastname @ maastrichtuniversity.nl. General questions about eNanoMapper and our BiGCaT solutions for nanosafety are also welcome in the comments.

CDK: Element and Isotope information

When reading files, the format in one way or another carries implicit information you may need for some algorithms. Element and isotope information is a key example. Typically, the element symbol is provided in the file, but not the mass number or isotope implied; you would need to read the format specification to learn which properties are implicitly meant. The idea here is that information about elements and isotopes is well standardized by organizations such as the IUPAC. Such default element and isotope properties are exposed in the CDK by the classes Elements and Isotopes. I am extending my Groovy Cheminformatics with the CDK with these bits.

Elements
The Elements class provides information about the element's atomic number, symbol, periodic table group and period, covalent radius and Van der Waals radius, and Pauling electronegativity (Groovy code):

Elements lithium = Elements.Lithium
println "atomic number: " + lithium.number()
println "symbol: " + lithium.symbol()
println "periodic group: " + lithium.group()
println "periodic period: " + lithium.period()
println "covalent radius: " + lithium.covalentRadius()
println "Vanderwaals radius: " + lithium.vdwRadius()
println "electronegativity: " + lithium.electronegativity()

For example, for lithium this gives:

atomic number: 3
symbol: Li
periodic group: 1
periodic period: 2
covalent radius: 1.34
Vanderwaals radius: 2.2
electronegativity: 0.98

Isotopes
Similarly, there is the Isotopes class to help you look up isotope information. For example, you can get all isotopes for an element or just the major isotope:

isofac = Isotopes.getInstance();
isotopes = isofac.getIsotopes("H");
majorIsotope = isofac.getMajorIsotope("H")
for (isotope in isotopes) {
  print "${isotope.massNumber}${isotope.symbol}: " +
    "${isotope.exactMass} ${isotope.naturalAbundance}%"
  if (majorIsotope.massNumber == isotope.massNumber)
    print " (major isotope)"
  println ""
}

For hydrogen this gives:

1H: 1.007825032 99.9885% (major isotope)
2H: 2.014101778 0.0115%
3H: 3.016049278 0.0%
4H: 4.02781 0.0%
5H: 5.03531 0.0%
6H: 6.04494 0.0%
7H: 7.05275 0.0%
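The per-isotope exact masses and natural abundances also let you derive other quantities yourself. As a minimal sketch (using only the Isotopes API shown above; actual values depend on the isotope data shipped with your CDK version), this estimates the average atomic mass of chlorine by weighting each isotope's exact mass by its natural abundance:

```groovy
isofac = Isotopes.getInstance()
double avgMass = 0.0
for (isotope in isofac.getIsotopes("Cl")) {
  if (isotope.naturalAbundance != null) {
    // abundances are given in percent, hence the division by 100
    avgMass += isotope.exactMass * isotope.naturalAbundance / 100.0
  }
}
println "average mass Cl: " + avgMass
```

For chlorine this should give a value close to the familiar 35.45.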

This class is also used by the getMajorIsotopeMass() method in the MolecularFormulaManipulator class to calculate the monoisotopic mass of a molecule:

molFormula = MolecularFormulaManipulator
  .getMolecularFormula(
    "C2H6O",
    SilentChemObjectBuilder.getInstance()
  )
println "Monoisotopic mass: " +
  MolecularFormulaManipulator.getMajorIsotopeMass(
    molFormula
  )

The output for ethanol looks like:

Monoisotopic mass: 46.041864812
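As a sanity check independent of the CDK, the monoisotopic mass can be reproduced by hand from the major-isotope masses (the IUPAC values 12C = 12 exactly, 1H = 1.007825032, and 16O = 15.994914620; the hydrogen value also appears in the isotope listing above). A small Python sketch:

```python
# Major-isotope (monoisotopic) masses in unified atomic mass units
masses = {"C": 12.0, "H": 1.007825032, "O": 15.994914620}

# Elemental composition of ethanol, C2H6O
formula = {"C": 2, "H": 6, "O": 1}

mono_mass = sum(count * masses[el] for el, count in formula.items())
print(f"{mono_mass:.9f}")  # prints 46.041864812
```

which agrees with the getMajorIsotopeMass() output above.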

CDK 1.5.8, Zenodo, GitHub, and DOIs

Screenshot from John's blog post.
John released CDK 1.5.8, which has a few nice goodies, like a new renderer. The full changelog is available. An interesting aspect of this release is that it uses ZENODO to make the release citable with a DOI. That is relevant because it simplifies (and makes a lot cheaper!) tracking its impact, e.g. with #altmetrics. And that matters too, because no one really has a clue how to decide which scientist is better than another, and which scientist should and should not get funding. While we know peer review of literature is severely limited, we happily accept it to determine career futures.

Anyway, we now have a DOI for a CDK release. So, everyone using the CDK in research can cite this specific CDK release in their papers with this DOI. Of course, most publishers still don't support providing reference lists as a list of DOIs and often do not show the DOI there, but all this is a step forward. John listed the DOI with a nicely ZENODO-provided icon in the release post.

If you follow the DOI you end up on the ZENODO website (they effectively act as a publishing platform). It is this page that I want to continue talking about, and in particular the list of authors. The webpage provides two alternatives. The first is the most prominent one when you first visit the page:


This looks pretty good, I think. It seems to have picked up a list of authors, and looking at the list, not from the standard AUTHORS file, but from the commit messages. However, that is unsuited for the CDK, whose repository history went from CVS, via SVN, to Git, and only the latter shows up. The list seems sorted by the number of contributions, but note that Christoph is missing: his work predates the Git era.

The second list of "authors" is given in the bottom right of the page, as "Cite As":


This suggestion is different, though it seems reasonable to assume the et al. (missing dot) refers to the rest of the authors of the first list. In the BibTeX export the full author list shows up again, supporting that idea.

Correct citation?
This makes me wonder: who are the authors of this release? Clearly, this version includes code from all authors in some sort of way. Code from some original authors may have long been replaced with newer code. And we noted the problem of missing authors, caused by the incomplete version control history.

An alternative is to consider this release as a product of those people who have contributed patches since the previous release. In fact, this is something we noted as important in the past and now always report when making a release. For the 1.5.8 release that list looks like:


That is, this approach basically follows an accepted practice in publishing: papers describing updates of ongoing projects involve only the people who contributed to that released work.

Therefore, I think the proper citation for this CDK 1.5.8 release should be:
    John May, Egon Willighagen, Mark Vine, Oliver Stücker, Andy Howlett, Mark Williamson, Sambit Gaan, Alison Choy (2014). cdk: CDK Release 1.5.8. ZENODO. 10.5281/zenodo.11681
Also note the correct spelling of the author names, though one can argue that they should have correctly spelled their names in the Git commit messages. Here are some challenges for GitHub in adopting the ORCID, I guess.

The question is, however, how do we get ZENODO to do this the way we want it to? I think the above citation makes much more sense, but others may have good reasons why the current practice is better. What should ZENODO pick up to get the author provenance from?

Open knowledge dissemination (with @IFTTT)

An important part of science is communication. That is why we publish. New insights are useless if they sit on some desk. Instead, reuse counts. This communication is not just about the facts, but also a means to establish research networks. Efficient research requires this: you cannot be an expert in everything or at least not be experienced with everything. That is, for most things you do, there is another researcher that can do it faster. This is probably one of the reasons why many Open Science projects actually work, despite limited funding: they are very efficient.

Readers of my blog know a bit about my research, and know how important data exchange is to me. But equally important is letting people know what I do. You can see that in my literature: I strive towards knowledge integration (think UserScripts, think CMLRSS, think Linked Open Drug Data) and efficient methods for data exchange, simply because I need this to get statistically significant patterns. After all, my background is primarily chemometrics; cheminformatics was my hobby, which explains the mashup.

FriendFeed was a brilliant platform for disseminating research and also for the exchange of data. Actually, it still is a brilliant platform, but when they sold themselves to Facebook, it got a lot quieter there. And, as said, communication needs a community, and without listeners it is just not the same. Scientists just moved to different social platforms, and it is no surprise FriendFeed didn't show up in Richard van Noorden's recent analysis. A lot of good things happened on FriendFeed; one was that it used RSS feeds, and users could indicate which information sources they liked to show up there. Try my FriendFeed account. Even better, listeners could select which of my information sources they did not want to listen to. For example, if you were not interested in Flickr images of Person X, but the other sources were interesting, you just silenced that source. Brilliant!

But this feature of using RSS to aggregate dissemination channels was not repeated by other networks, and If This Then That fills that gap. Unlike FriendFeed it does not aggregate the items, but sends them to external social networks (and many other systems), including Facebook and Twitter. It does a lot more than RSS feeds (e.g. check out the Android app), but that is an important one for me and the point of this blog post.

Actually, they have taken the idea of sending around events to the next level, allowing you to tune how an item shows up. An example of that is what you see in the screenshot: I played with how news items from the RSS feed of changes I made to WikiPathways are shown. After a few iterations, I ended up with this "recipe":


The grey boxes show information from the RSS feed. The first iteration (see the bottom tweet in the screenshot) only contained a custom prefix "Wikipathways edit:" followed by the {{EntryTitle}} and {{EntryUrl}}. I realized that without the commit message it would not be very informative, and added the {{EntryContent}} (second-to-last tweet in the screenshot). Then I realized that having "Pathway" twice (once from my prefix, once from the {{EntryTitle}}) was not nice to look at either, and I ended up with the above "Wiki{{EntryTitle}} edit:", as visible in the top tweet in the screenshot. Too bad the RSS feed of WikiPathways doesn't have graphics :(
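For reference, the final tweet template of the recipe looks something like this (the {{...}} placeholders are IFTTT "ingredients" filled in from the RSS feed; the exact ordering of the fields in my recipe is from memory):

```
Wiki{{EntryTitle}} edit: {{EntryContent}} {{EntryUrl}}
```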

At this moment I am using a few outlets: I use Twitter to send out about anything, like I did with FriendFeed. Sadly, Twitter doesn't have the same power to select which tweets you like to listen to. Not interested in the changes I make to WikiPathways? Sorry, you'll have to live with it. You can also try my Facebook account, where I route fewer things. There you can like and comment, but I will not respond.

Anyway, my message is, give IFTTT a try!

First steps in Open Notebook Science

Scheme 2 from this Beilstein Journal of Organic
Chemistry paper
by Frank Hahn et al.
A few weeks back I blogged about my first Open Notebook Science entry. That post suggested I would look at a few ONS service providers, but, honestly, Open Notebook Science Network serves my needs well.

What I have in mind, and will soon advocate, is that the total synthesis approach from organic chemistry fits chem- and bioinformatics research. It may not be perfect, and perhaps somewhat artificial (no pun intended), but I like the idea.

Compound to Compound
Basically, a lab notebook entry should be a step of something larger. You don't write Bioclipse from scratch. You don't do a metabolomics pathway enrichment analysis in one step, either. It's steps, each one taking you from one state to another. Ah, another nice analogy (see automata theory)! In terms of organic chemistry: from one compound to another. The importance here is that the analogy shows that there is no step you should not report. The same applies to cheminformatics: you cannot report a QSAR model without explaining how you cleaned up that SDF file you got from paper X (which is still common practice).

Methods Sections
Organic chemistry literature has well-defined templates for reporting the method for a reaction, including minimal reporting standards for the experimental results. For example, you must report chemical shifts and an elemental composition. In cheminformatics we do not have such templates, but there is no reason not to. Another feature that must be reported is the yield.

Reaction yield
The analogy with organic chemistry continues: each step has a yield, and we must report it. I am not sure how, and this is one of the things I am exploring; it will be part of my argument. In fact, keeping track of the variance introduced is something I have been advocating for longer. I think it really matters. We, as a research field, now publish a lot of cheminformatics and chemometrics work without taking into account the yield of methods, though, for obvious reasons, much more so in chemometrics than in cheminformatics. I won't go into that now; there is indeed a good deal of benchmark work, but the point is that any cheminformatics "reaction" step should be benchmarked.

Total synthesis
The final aspect is that, by taking this analogy, there is a clear protocol for how cheminformatics, or bioinformatics, work must be reported: as a sequence of detailed small steps. It also means that intermediate "products" can be continued with in multiple ways: you get a directed graph of methods you applied and results you got.

You get something like this:

Created with Graphviz Workspace.

The EWx codes refer to entries in my lab notebook:
  1. EW4: Finding nodes in Anopheles gambiae pathways with IUPAC names
  2. EW5: Finding nodes in Homo sapiens pathways with IUPAC names
  3. EW6: Finding nodes in Rattus norvegicus pathways with IUPAC names
  4. EW7: converting metabolite Labels into DataNodes in WikiPathways GPML
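Such a workflow graph is easy to write down in Graphviz DOT as well. A sketch of the graph above, assuming the three organism-specific searches (EW4-EW6) feed into the label conversion step (EW7); the actual notebook may branch differently:

```dot
digraph workflow {
  EW4 [label="EW4: A. gambiae IUPAC names"]
  EW5 [label="EW5: H. sapiens IUPAC names"]
  EW6 [label="EW6: R. norvegicus IUPAC names"]
  EW7 [label="EW7: Labels to DataNodes"]
  EW4 -> EW7
  EW5 -> EW7
  EW6 -> EW7
}
```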

Open Notebook Science
Of course, the above applies also if you do not do Open Notebook Science (ONS). In fact, the above outline is not really different from how I did my research before. However, I see value in using the ONS approach here. By having it Open, it

  1. requires me to be as detailed as possible
  2. allows others to repeat it
Combine this with the advantage of the total synthesis analogy:
  1. "reactions" can be performed in reasonable time
  2. easy branching of the synthesis
  3. clear methodology that can be repeated for other "compounds"
  4. step towards minimal reporting standards for cheminformatics methods
  5. clear reporting structure that is compatible with journal requirements
OK, that is more or less the paper I want to write up and submit to the Jean-Claude Bradley Memorial Issue in the Journal of Cheminformatics and Chemistry Central. It is an idea, something that helps me, and I hope more people find useful bits in this approach.

On Open Access in The Netherlands

Yesterday, I received a letter from the Association of Universities in The Netherlands (VSNU, @deVSNU) about Open Access. The Netherlands is a very interesting country for research: it is small, meaning we have few resources to establish and maintain high-profile centers, and we also believe strong education benefits from distribution, so we have many good universities rather than a few excelling ones. Mind you, this obscures the fact that we absolutely do have excelling research institutes and research groups; they just are not concentrated in one university.

Another important aspect is that all those Dutch universities are expected to compete with each other for funding. As a result I have experienced rather interesting collaborations between universities. That's a downside of a small country: everyone knows each other, often in way too much detail. But my point is that the Dutch can be rather conservative. That kills innovation, and is in my opinion a key reason why we are not breaking into the top 50 of rankings; more so than a lack of concentration. Concentration of funding in top research institutes has not been extensively evaluated, but I do not think its efficiency has been proven higher than that of previous funding approaches.

Anyway, this letter I received is part of their Open Access program. Here too, the Dutch universities are conservative (well, relative to my views, at least). Now, the Open Access debate is not so interesting, because it primarily ends up being about who pays whom (boring) and whether we should go gold or green (beside the point, see below), and, sadly, there too many people think about who pays whom again (still boring).

Therefore, given the outlined importance and impact of Dutch research, I found it relevant to post about the progress of Open Access in my small country. The letter is available in English.

Basically, the letter is an answer to an earlier letter from our government about Open Access, and it warns about actions that will soon be undertaken (so, not really pro-active). However,
    "[they] are also appealing to you to continue to advocate free access to your own scientific publications."
Well, I have, though not so actively, and maybe this post can be the start of a change. Because what basically bothers me is that the Open Access discussion, also in The Netherlands, is biased. And indeed, the letter continues with a section about gold and green access. If the VSNU really wants to promote free access to research, it should not even accept green. We all know that it is not just about being able to look at research (for free), but about being able to mix and improve. Reuse. Continue. Stand on shoulders. The fact that this letter focuses on publications only and does not spend a word on reuse is rather depressing, and does not give me even the slightest hint that The Netherlands will break into that top 50 any time soon.

Overall, the letter is relatively positive for the Open Access movement, though reactive. They still have some explaining to do:
    "The golden route is more complex. However, many believe that in the end it is a
    more sustainable route to Open Access."
(Or maybe readers can explain to me what is complex about the golden route?)

The following is a rather interesting section, but really only if they had focused on Open Access in its pure form, the one that allows research reuse. I think it now leaves them with a low starting point for bargaining with resistant publisher lawyers and managers who have long lost interest in the academics in favor of the shareholders:
    For the past ten years, publishers have been offering journals in package deals referred to as Big Deals. Shortly negotiations with the major publishers about these Big Deals will take place, including Elsevier, Springer and Wiley. The Dutch universities have expressed their wish to make agreements with these publishers about the transition to Open Access as part of those Big Deals. Universities expect publishers to take serious steps to facilitate that transition.
I hope the VSNU will clarify what they mean by "serious". Because the publishers all came up with "me too" solutions (setting up new OA journals) without seriously changing their model. No large publisher dared to make their flagship journals full gold Open Access. That would be serious business; all we see now is scribbling in the margin.

Perhaps that is the reason for the wish to be in the top 50. Maybe the VSNU just wants a better bargaining position.

The letter ends with what researchers can do. And with that, they are spot on:
    As a researcher, you can play a vital role in the transition to Open Access. We have 
    mentioned the possibility of depositing articles in the repository of your own
    university. But there is more. It’s important to consider that researchers play a key 
    role in the publishing process: as providers of the scientific content, as reviewers 
    and as members of editorial and advisory boards. We hope that where ever possible, 
    you will ask publishers to convert to an Open Access model.
What any researcher can already do to promote (proper) Open Access:

  1. stop reviewing and publishing closed-access papers (you have way too many review requests already, and some filtering will not hurt you)
  2. stop reviewing and publishing for non-gold Open Access journals (a step further than the first item)
  3. submit only to full-gold Open Access journals (plenty of options; importantly, the quality and impact of your paper do not depend on the journal, but on you. If not, you're just a bad author and researcher and should go back to school, or start learning from feedback on your Open Notebook Science, so that you improve your act before you submit; really, it happens to the best of us: multidisciplinary research is hard: you cannot excel in biology and chemistry and statistics and informatics and computer science and data analysis and materials science and be a perfect and creative linguist too (well, not all of us, anyway))
  4. put your previous mistakenly closed-access papers in university repositories (most Dutch universities have solutions; not all yet)
  5. make previously published closed-access papers gold Open Access (yes, you can! I am in the process of doing this for the CDK I paper, and other ACS papers will follow)
  6. get an ORCID
  7. use #altmetrics to see that gold Open Access gives you more impact for your papers too (service providers include ImpactStory, Altmetric.com, Plum Analytics, etc)
Of course, it is not only about publications. Again, the VSNU would do good to learn that research is not the same as publications. Besides sending letters, I think the VSNU can do this to promote Open Science, which is what I hope they are after:
  1. negotiate with the government and major science and funding agencies (KNAW, NWO) to stop focusing on publications as primary output
  2. start focusing on output other than publications (e.g. data sets, software), even if you have not yet ended negotiations with others, just to set a proper example
  3. make research outcomes machine readable (read this interesting post from our national library)
  4. actively explore business models around Open Science (and not have your universities' spin-off departments only know about patent law, ignore the rest of the world)
  5. adopt the ORCID nation-wide, starting Jan 2015
  6. start using #altmetrics to get a better perspective of the performance of your members
Of course, I am more than willing to help the VSNU with this transition. I can be reached at the Department of Bioinformatics - BiGCaT, NUTRIM, FHML, Maastricht University. There are many options I have missed here (like data repositories, data citation, DOIs, and whatever).


PS. My ImpactStory profile will tell you that more than 80% of my publications are Open Access. Not all gold yet, but I am working on changing that for some old papers.

Open Notebook Science ONSSP #1: http://onsnetwork.org/

As promised, I am slowly setting out to explore ONSSPs (Open Notebook Science Service Providers). I do not have a full overview of solutions yet, but found LabTrove and Open Notebook Science Network. The latter is more clearly an ONSSP, while the former seems to be just the software.

So, my first experiment is with Open Notebook Science Network (ONSN). The platform uses WordPress, a proven technology. I am not a huge fan of the set-up, which has so many features that it is sometimes hard to find what you need. Indeed, my first write-up ended up as a Page rather than a Post. On the upside, there is a huge community around it, with experts in every city (literally!). But my ONS is now online and you can monitor my Open research with this RSS feed.

One of the downsides is that the editor is not oriented at structured data, though there is a feature for Forms which I may need to explore later. My first experiment was a quick, small hack: upgrading Bioclipse with OPSIN 1.6. As discussed in my #jcbms talk, I think it may be good for cheminformatics if we really start writing up step-by-step descriptions of common tasks.

My first observation is that it is an easy platform to work with. Embedding images is easy, and there should be options for chemistry extensions. For example, there is a Jmol plugin for WordPress, there are plugins for Semantic Web support (no clue which one I would recommend), and extensions for bibliographies are available too, if I am not mistaken. And we already see my ORCID prominently listed; I am not sure if I did this, or whether the ONSN people added this as a default feature.

Even better is the GitHub support by @benbalter that @ONScience made me aware of. The instructions were not crystal clear to me (see issues #25 and #26), I suggested some fixes (pull request #27), it started working, and I now have a backup of my ONS at GitHub!

So, it looks like I am going to play with this ONSSP a lot more.

Open Notebook Science: also for cheminformatics

Last Monday the Jean-Claude Bradley Memorial Symposium was held in Cambridge (slide decks). Jean-Claude was a remarkable man; I spoke at the meeting on several things, including how he made me jealous with his Open Notebook Science work. I had the pleasure of working with him on an RDF representation of solubility data.

It took me a long time to group my thoughts and write the abstract I submitted to the meeting:
    I always believed that with Open Data, Open Source, and Open Standards I was doing the right thing; that it was enough for a better science. However, I have come to the realization that these features are not enough. Surely, they aid Open collaborations, though not even sufficient there, but they fail horribly in the "scientific method." Because while ODOSOS makes work reproducible, it lacks the context needed by scholars to understand what it solved. That is, it details out in much detail how some scientific question is answered, but not what question that was. As such, it fails to follow the established practices in scholarly research. In this presentation I will show how I should have done some of my research, and ponder on reasons why I had not done so.
And it also took me a long time and a lot of stress to get together some slides, but I managed in the end:



During the talk I promised to start doing Open Notebook Science (ONS) for my research, and I am currently exploring ONS platforms.

The meeting itself was great. There was a group of about 40 people in Cambridge and another 15 online, and most of them into Open Science or at least wanting to learn what it is about. I met old friends and new people, including a just-graduated Maastricht Science Programme student (one that I did not have in my class last year). Coverage on Twitter was pretty good (using the #jcbms hashtag, an archive) with some 90 people using the hashtag.
Several initiatives seem to be evolving, including an ONS initiative and a memorial special issue. All these will need help from the community. The time is right.

#JChemInf Volume 5 as PDF on @FigShare

One of the things I do to prepare for a holiday is get some reading material together. I haven't finished Gödel, Escher, Bach yet (a suggestion from the blogosphere), with a bit of luck there are new chapters of HPMOR, and I normally try to catch up with literature. One advantage of Open Access is that you can remix. So, I created a single PDF of all JChemInf Vol. 5 articles (last year I did volumes 1, 2, 3, and 4). This PDF is about 75 MB in size, and therefore fits on most smartphones. The PDF has an index, though without entries for each paper, but jumping from abstract to abstract works fine. It contains a bit over fifty peer-reviewed papers.

Another advantage of Open Access is that you can reshare. And so I did, and the volumes are available from FigShare:
  1. JChemInf Vol.1
  2. JChemInf Vol.2
  3. JChemInf Vol.3
  4. JChemInf Vol.4
  5. JChemInf Vol.5
Of course, a clear downside is that it interferes with #altmetrics. And I am wondering if a similar thing can be done with ePubs.

Journal Open Data Guidelines: plenty of room for clarifications

J. Gray, Wikipedia. CCZero.
Several journals are playing with statements about Open Data; for example, F1000Research requires Open Data. Just as publishers are judged on their implementation of Open Access, so should we critically analyze journals that claim to be Open Data journals. Well, such claims I have not seen, but some journals have promising statements, like:
BioMed Central
    Data associated with the article are available under the terms of the CCZero.
However, this claim is vague, or, at least, too vague for a paper I am currently reviewing. The fuzziness lies in the word "associated". What defines associated data? How does this relate to reproducibility? If the purpose of Open Data is that the results of the paper can be reproduced, does it mean all data? And what happens if some of the data is from a previous paper? Or from a proprietary database? Is a paper that uses data from a proprietary database in key steps of its argumentation acceptable to a journal that demands Open associated Data? What if the authors do not have control over the license? Or is it limited to new data? But what defines new data here? That is a really hard question in an era where data has very limited provenance (versioning, author attribution, etc).