New paper: "XMetDB: an open access database for xenobiotic metabolism"

Back in 2013 at the OpenTox conference in Mainz I spoke with Ola, Patrik, and Nina. They were working on a database for CYP metabolism, XMetDB, which I joined on the spot. The database offers Open Data, has an Application Programming Interface (API), is Open Source, and captures a good amount of experimental detail, like the specific enzyme involved and the actual atom mapping of the reaction. A few weeks ago, the paper describing the database was published in the Journal of Cheminformatics (doi:10.1186/s13321-016-0161-3). It's not perfect, but we hope it is a seed for more to follow.

The data, it turns out, is really hard to come by. While I was adding data to the database for the best-selling drugs, it was hard to find publications describing experiments done in humans (many experiments use rat microsomes). Not only does that make it hard to identify the specific CYP enzyme, the enzyme is also not the human homologue. BTW, since the background of this paper is to create a knowledge base for computational prediction of CYP metabolism, ideally we would even have a specific protein sequence, including any missense SNPs affecting the 3D structure of the enzyme.

However, even for the (at least then) best-selling drug aripiprazole, primary literature was really hard to find! There is a lot of literature just copying knowledge from other papers, and those other "papers" may in fact be the information sheet you get when you buy the actual drug. Alternatively, personal communication and conference posters are cited as primary literature. All of which only stresses the importance of a database like this.

At this moment the project is stalled: none of the currently involved groups has funding for continued development. I guess collaborations are welcome! ChEMBL 22 now has metabolism data for compounds, but I have not explored yet whether it has all the details of the transformations needed for XMetDB. At the very least, it may serve as a source of primary literature references.

Spjuth, O., Rydberg, P., Willighagen, E. L., Evelo, C. T., Jeliazkova, N., Sep. 2016. XMetDB: an open access database for xenobiotic metabolism. Journal of Cheminformatics 8 (1). doi:10.1186/s13321-016-0161-3

NanoSafety Cluster presentation: Open Data & NSC Activities

Two weeks ago (already!), the NanoSafety Cluster (NSC) organized two meetings. First, on Wednesday afternoon, there was the NSC half-yearly meeting. Second, on Thursday and Friday, the 2nd NanoSafety Forum for Young Scientists took place in beautiful Visby on Gotland. I ran an experiment there, which I will blog about later. Here, please find the slides of my presentation about Open Data, which I gave on Wednesday:

Oh, and I also presented a few slides about the Working Group 4 activities:

Metabolite identifier mapping databases

Caffeine metabolites. Source: Wikimedia.
If you want to map experimental data to (digital) biological pathways, you need to know which measured datum matches which metabolite in the pathways (that also applies to transcriptomics and proteomics data, of course). However, if a pathway does not use identifiers from a single database, or your analysis platform outputs data with CAS registry numbers, then you need something like identifier mapping. In Maastricht we use BridgeDb for that, and I develop the metabolite identifier mapping databases, which provide the mapping data to BridgeDb, which performs the mapping.
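To give an impression of what that looks like in practice, here is a minimal sketch using the BridgeDb Java API. The file name metabolites.bridge and the HMDB identifier are just illustrative examples, not fixed names; the system codes like Ch and Wd are the ones listed later in this post:

    import java.util.Set;
    import org.bridgedb.BridgeDb;
    import org.bridgedb.DataSource;
    import org.bridgedb.IDMapper;
    import org.bridgedb.Xref;

    public class MappingExample {
        public static void main(String[] args) throws Exception {
            // register the driver for local (Derby-based) BridgeDb mapping files
            Class.forName("org.bridgedb.rdb.IDMapperRdb");
            IDMapper mapper = BridgeDb.connect("idmapper-pgdb:metabolites.bridge");
            // map an HMDB identifier (system code Ch) to Wikidata (system code Wd)
            Xref hmdb = new Xref("HMDB00001", DataSource.getBySystemCode("Ch"));
            Set<Xref> hits = mapper.mapID(hmdb, DataSource.getBySystemCode("Wd"));
            for (Xref hit : hits)
                System.out.println(hmdb + " -> " + hit);
        }
    }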

However, identifier mapping for metabolites is non-trivial, and I won't go into details in this post. Instead, let me describe the data sources behind the mapping databases that I have been releasing under the CCZero waiver on Figshare. When I took over the building of these databases, they used data from the Human Metabolome Database (doi:10.1093/nar/gks1065), and they still do. However, I have since added ChEBI (doi:10.1093/nar/gkv1031) and Wikidata as data sources. The latter I need to support people with, for example, KNApSAcK identifiers (doi:10.1093/pcp/pct176).

So, this weekend I released a new mapping database, based on HMDB 3.6, ChEBI 142, and data from Wikidata from September 7. Here are the total numbers of identifiers and the changes compared to the June release for the supported identifier databases:

Number of ids in Kd (KEGG Drug): 2013 (unchanged)
Number of ids in Cks (KNApSAcK): 4357 (unchanged)
Number of ids in Ik (InChIKey): 52337 (unchanged)
Number of ids in Ch (HMDB): 41520 (6 added, 0 removed -> overall changed +0.0%)
Number of ids in Wd (Wikidata): 22648 (195 added, 10 removed -> overall changed +0.8%)
Number of ids in Cpc (PubChem-compound): 30699 (154 added, 36 removed -> overall changed +0.4%)
Number of ids in Lm (LIPID MAPS): 2611 (unchanged)
Number of ids in Ce (ChEBI): 131580 (4 added, 6 removed -> overall changed -0.0%)
Number of ids in Ck (KEGG Compound): 15968 (unchanged)
Number of ids in Cs (Chemspider): 24948 (10 added, 2 removed -> overall changed +0.0%)
Number of ids in Wi (Wikipedia): 4906 (unchanged)

An overview of recent releases (I'm trying to keep a monthly schedule) can be found here, and the version I released this weekend has doi:10.6084/m9.figshare.3817386.v1.

Doing science has just gotten even harder

Annotation of licenses of life science databases in Wikidata.
Those following me on Twitter may have seen the discussion this afternoon. A weird law case went to the European court, which sent out its ruling today. And it's scary, very scary. The details are still unfolding, and several media have written about it earlier. It's worth checking out for everyone doing research in Europe, particularly if you are a chem- or bioinformatician. I may be wrong in my interpretation, and I hope I am; I hope even more to be proven wrong soon, but fear it will not be soon at all. The initial reporting I saw was in a Dutch news outlet, but Sven Kochmann pointed me to this press release from the Court of Justice of the European Union. Worth reading. I will need to write more about this soon, to work out the details of why this may turn out disastrous for European research. For now, I will quote this part of the press release:
    Furthermore, when hyperlinks are posted for profit, it may be expected that the person who posted such a link should carry out the checks necessary to ensure that the work concerned is not illegally published.
I stress this is only part of the full ruling, because the verdict rests on a combination of arguments. What this argument does, however, is invert an important principle: you now have to prove that you are not violating copyright.

Now, realize that in many European Commission funded projects, with multiple partners, sharing intellectual property is non-trivial, and determining ownership even less so (just think about why traditional publishers require you to reassign copyright to them! BTW, never do that!), etc., etc. A lot of funding actually goes to small and medium-sized companies, who are really not waiting for more complex law, nor for more administrative work.

A second realization is that few scientists understand, or want to understand, copyright law. The result is hundreds of scholarly databases that do not define who owns the data, nor under what conditions you are allowed to reuse, share, reshare, or modify it. Yet scientists do all of those things. So, not only do these databases often not specify the copyright/license/waiver (CLW) information, they certainly don't really tell you how they populated their database, e.g. how much they copied from other websites, under the assumption that knowledge is free. Sadly, database content is not. Often you don't even need to wonder about it, as it is evident, or even proudly stated, that they used data from another database. Did they ask permission for that? Can you easily look that up? Because, following the above quoted argument, you are now only allowed to link to that database once you have figured out whether its data is there legally. And believe me, that is not cheap.

Combine that, and you have this recipe for disaster.

A community that knows these issues very well is the open source community. Therefore, you will find a project like Debian to be really picky about licensing: if it is not specified, they won't have it. This is what is going to happen to data too. In fact, this is also basically why eNanoMapper is quite conservative: if it does not get clear CLW information from the rightful owner (people are more relaxed about sharing data from others than their own data!), it is not going to be included in the output.

IANAL, but I don't have to be one to see that this will only complicate matters, and the last thing it will do is help the Open Data efforts of the European Commission.

I have yet to figure out what this means for my Linked Data work. Some databases do great work and have very clear CLW information: think ChEMBL and WikiPathways, and Open PHACTS also did a wonderful job in tracking and propagating this CLW information. On the other hand, Andra Waagmeester did an analysis of the license information of life science databases; note the numbers of 'free content' and 'proprietary' databases (top right figure), which are the two categories of databases where the CLW info is not really clear. How large the problem of illegal content in those databases is (e.g. text mined from literature, or screenscraped from another database), who knows, but I can tell you it is not insignificant, unless you think it's 99%.

At the same time, of course, the solution is very simple: only use and link to websites with clear CLW information and good practices. That, however, rules out many of the current databases, and also supplementary information, where, even more than in databases, the rules of copyright are ignored by scientists.

And, honestly, I cannot help but wonder what all the publishers will now do with all the articles published in the past 20 years with hyperlinks in them. I hope for them that none of those links point to illegal material. Worse, following the above quoted argument, they will have to make sure that none(!) of those hyperlinks point to material with unclear copyright.

I'll end this post with a related bit of Dutch law (well, related at least for the sake of this post). If you buy second-hand goods, and the price is less than something like one third of the new price, you must ask for the original receipt of the first purchase, because without it you are legally assumed to realize the goods are probably stolen. How would that translate to this situation? If the linked scientific database costs less than one third of the commercial alternative, you may assume it is illegal? Fortunately, that argumentation does not apply here.

Problem is, there are enough "smart" people that misuse weird laws and rulings like this to make money. Think of the patent trolls, or about this:
What can possibly go wrong?

Elsevier launches DataSearch

Elsevier (RELX Group) has seen a lot of publicity this week, again. After the patent on peer review earlier this week, today I learned from Max Kemman about Elsevier's new search website for data. This is great! Finding data (think FAIR, doi:10.1038/sdata.2016.18) is hard. ELIXIR Europe aims at fixing this, and is working on open standards to have data explain itself. But an entry point that finds information is still very much welcome, like the search interface for eNanoMapper that indexes information from multiple data sources (well, two at this moment).

For scientific information this doesn't really exist yet; we have to make do with tools like Google Scholar and Google Images. Both are pretty brilliant and allow you to filter on things besides your regular keyword search. Of course, what we really need is an ontology-backed search, of the kind Google seamlessly integrates under the hood.

Now, particularly for my teaching roles, I am frequently looking for material for slides, to support my message. There, Google Images is great, as it allows me to filter for images that I am allowed to use, reuse, and even modify (e.g. to highlight part of the image). Now, I know that some jurisdictions (like the USA) have more elaborate rules about fair use in education, but these rules are too often challenged, and money, DRM, etc. limit those rights. Let alone the scary, proposed European legislation (follow Julia Reda!).

So, I very much welcome this new effort! Search engines have a better track record than catalogs, like the Open Knowledge Foundation's DataHub. Of course, some repositories are getting so large, like Figshare (to a large extent thanks to very active population by publishers like PLOS), that they may soon become a single point of entry themselves.

Anyway, Elsevier is looking for peer review, which I hereby give them for free (like I gave them free peer reviews until they crossed an internal, mental line; see The Cost of Knowledge). I can only hope that I am not violating their patent. Oh, and please don't look at the HTML of the website; you would certainly be violating their Terms of Use. They really need to talk to their lawyers, because they're making a total mess of it.

cAMP as a signalling compound?

cAMP. Picture from Wikipedia.
Maastricht University gives me the opportunity to study how chemical differences between individuals affect metabolism, particularly in humans (I'm a chemist working in biology). Reading biological literature and textbooks sometimes makes my jaw drop. Biology is beautifully complex, and sometimes it just doesn't make sense at all.

So, in my WTF moment of the day, I was reading about various RNAs, then nucleotides, etc., and got to cAMP. This molecule, and I know that from WikiPathways too, can act as a secondary signalling compound: a membrane receptor passes the signal on to cAMP. But then? I mean, one single molecule, supposed to give a variety of signals. How?? How can it be selective? How is the hormone-specific signal not lost when passing through the cytoplasm?? Or is it just a general "ALERT ALERT, SOMETHING OUTSIDE HAPPENED"?

Back to the book.

Alzheimer’s disease, PaDEL-Descriptor, CDK versions, and QSAR models

A new paper in PeerJ (doi:10.7717/peerj.2322) caught my eye for two reasons. First, it's nice to see a paper using the CDK in PeerJ, one of the journals of an innovative, gold Open Access publishing group. Second, that's what I call a graphical abstract (shown on the right)!

The paper describes a collection of Alzheimer-related QSAR models. It primarily uses fingerprints, calculated with the PaDEL-Descriptor software (doi:10.1002/jcc.21707) in particular. I just checked the (new) PaDEL-Descriptor website and it still seems to use CDK 1.4. The page has the note "Hence, there are some compatibility issues which will only be resolved when PaDEL-Descriptor updates to CDK 1.5.x, which will only happen when CDK 1.5.x becomes the new stable release." and I hope Yap Chun Wei will soon find time to make this update. I had a look at the source code, but with no NetBeans experience and no install instructions, I was unable to compile it. AMBIT is already up to speed with CDK 1.5, so the migration should not be too difficult.

Mind you, PaDEL is used quite a bit, so the impact of such an upgrade would be substantial. The Wiley webpage for the article mentions 184 citations; Google Scholar counts 369.

But there is another thing. The authors of the Alzheimer paper compare various fingerprints and the predictive power of models based on them. I am really looking forward to a paper where the authors compare the same fingerprint (or set of descriptors) but with different CDK versions, particularly CDK 1.4 against 1.5. My guess is that the models based on 1.5 will be better, but I am not entirely convinced yet that the increased stability of 1.5 is actually going to make a significant impact on the QSAR performance... what do you think?
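To make that concrete: such an experiment is little more than rerunning something like the sketch below against each CDK version on the classpath and comparing the resulting models. This is just an illustration with the CDK's MACCS fingerprinter, not the protocol of the paper:

    import org.openscience.cdk.aromaticity.Aromaticity;
    import org.openscience.cdk.fingerprint.IBitFingerprint;
    import org.openscience.cdk.fingerprint.MACCSFingerprinter;
    import org.openscience.cdk.interfaces.IAtomContainer;
    import org.openscience.cdk.silent.SilentChemObjectBuilder;
    import org.openscience.cdk.smiles.SmilesParser;
    import org.openscience.cdk.tools.manipulator.AtomContainerManipulator;

    public class FingerprintExample {
        public static void main(String[] args) throws Exception {
            SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
            IAtomContainer mol = parser.parseSmiles("CC(=O)Oc1ccccc1C(=O)O"); // aspirin
            // perceive atom types and aromaticity before fingerprinting
            AtomContainerManipulator.percieveAtomTypesAndConfigureAtoms(mol);
            Aromaticity.cdkLegacy().apply(mol);
            IBitFingerprint fp = new MACCSFingerprinter().getBitFingerprint(mol);
            System.out.println("bits set: " + fp.cardinality());
        }
    }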

Simeon, S., Anuwongcharoen, N., Shoombuatong, W., Malik, A. A., Prachayasittikul, V., Wikberg, J. E. S., Nantasenamat, C., Aug. 2016. Probing the origins of human acetylcholinesterase inhibition via QSAR modeling and molecular docking. PeerJ 4, e2322+. doi:10.7717/peerj.2322

Yap, C. W., May 2011. PaDEL-descriptor: An open source software to calculate molecular descriptors and fingerprints. Journal of Computational Chemistry 32 (7), 1466-1474. doi:10.1002/jcc.21707

The Groovy Cheminformatics scripts are now online

My Groovy Cheminformatics with the Chemistry Development Kit book has now sold more than 100 times. An older release can be downloaded as CC-BY from Figshare and was "bought" 39 times. That does not really make a living, but it does allow me to financially support CiteULike, for example, where you can find all the references I use in the book.

The content of the book is not unique. The book exists for convenience: it explains things around the APIs and gives tips and tricks, in the first place for myself, to help me quickly answer questions on the cdk-user mailing list. This list is a powerful source of answers, and its archive covers 14 years of user support:

One of the design goals of the book was to have many editions, allowing me to keep all scripts updated. In fact, all scripts in the book are run each time I make a new release of the book and, therefore, with each release of the CDK that I make a book release for. That also explains why a new release of the book currently takes quite a bit of time: there are so many API updates at the moment, as you can read about in the draft CDK 3 paper.

Now, I have long had the plan to also make the scripts freely available. However, I never got around to making the website to go with that. I have given up on the idea of a website and now use GitHub. So you can now, finally, find the scripts for the two active book releases on GitHub. Of course, without the explanations and context; for that you need the book.

Happy CDK hacking!

Use of the BridgeDb metabolite ID mapping database in PathVisio

A long time ago Martijn van Iersel wrote a PathVisio plugin that visualizes the 2D chemical structures of metabolites in pathways as found on WikiPathways. Some time ago I tried to update it to a more recent CDK version, but did not have enough time back then to get it going. However, John May's helpful DepictionGenerator made it a lot easier, so I set out this morning to update the code base to use this class and CDK 1.5.13 (well, strictly speaking it's running a prerelease (snapshot) of CDK 1.5.14). With success:
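The essence of that update is how little code a depiction now takes. A minimal sketch, where the SMILES and output file name are illustrative examples rather than the plugin's actual code:

    import org.openscience.cdk.depict.DepictionGenerator;
    import org.openscience.cdk.interfaces.IAtomContainer;
    import org.openscience.cdk.silent.SilentChemObjectBuilder;
    import org.openscience.cdk.smiles.SmilesParser;

    public class DepictExample {
        public static void main(String[] args) throws Exception {
            SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
            IAtomContainer mol = parser.parseSmiles("O=c1cc[nH]c(=O)[nH]1"); // uracil
            new DepictionGenerator()
                .withAtomColors()   // color atoms by element
                .withSize(300, 300) // requested size in pixels
                .depict(mol)
                .writeTo("uracil.png"); // output format taken from the extension
        }
    }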

The released version is a bit more tweaked and shows the 2D structure diagram filling more of the Structure tab. I have submitted the plugin to the PathVisio Plugin Repository.

Now, you may know that these GPML pathways only contain identifiers, and no chemical structures. But this is where the metabolite identifier mapping database helps (doi:10.6084/m9.figshare.3413668.v1): it contains SMILES strings for many of the compounds. It does not yet contain SMILES strings from Wikidata, but I will start adding those in upcoming releases too. The current SMILES strings come from HMDB.

To show how all this works, check out the PathVisio screenshot below. The selected node in the pathway has the label uracil, and the left-most front dialog was used to search the metabolite identifier mapping database, which found many hits in HMDB and Wikidata (middle dialog). The Wikidata identifier was chosen for the data node, allowing PathVisio to "interpret" the biological nature of that node in the pathway. However, along with many mapped identifiers (see the Backpage on the right), this also provides a SMILES that is used by the updated ChemPaint plugin.
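Under the hood, fetching that SMILES is a BridgeDb attribute lookup. A rough sketch of what that looks like, assuming a local mapping file named metabolites.bridge and assuming the attribute is stored under the name "SMILES" (the HMDB identifier for uracil is used as example input):

    import java.util.Set;
    import org.bridgedb.AttributeMapper;
    import org.bridgedb.BridgeDb;
    import org.bridgedb.DataSource;
    import org.bridgedb.IDMapper;
    import org.bridgedb.Xref;

    public class SmilesLookup {
        public static void main(String[] args) throws Exception {
            Class.forName("org.bridgedb.rdb.IDMapperRdb");
            IDMapper mapper = BridgeDb.connect("idmapper-pgdb:metabolites.bridge");
            // local BridgeDb mapping files also implement AttributeMapper
            AttributeMapper attributes = (AttributeMapper) mapper;
            Xref uracil = new Xref("HMDB00300", DataSource.getBySystemCode("Ch"));
            Set<String> smiles = attributes.getAttributes(uracil, "SMILES");
            System.out.println(smiles);
        }
    }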

Setting up a local SPARQL endpoint

... has never been easier, and I have to say, with Virtuoso it already was easy.

Step 1: download the jar and fire up the server
OK, you do need Java installed, which for many is still the case, despite Oracle doing their very best to totally ruin it for everyone. But seriously, visit the Blazegraph website (@blazegraph), download the jar, and type:

$ java -jar blazegraph.jar

It will give some output on the console, including the address of a webpage with a SPARQL endpoint, an upload form, etc.

That it tracks past queries is a nice extra.

Step 2: there is no step two

Step 3: OK, OK, you also want to run a SPARQL query from the command line
Now, I have to say, the webpage does not have a "Download CSV" button on the SPARQL endpoint. That would be great, but doing it from the command line is not too hard either.

$ curl -i -H "Accept: text/csv" --data-urlencode \
  query@query.rq http://localhost:9999/blazegraph/sparql

Here query.rq is a file containing your SPARQL query; the endpoint URL shown is the current default (but see below).
But it would be nice if you did not have to copy/paste the query into a file, or go to the command line in the first place. Also, I had some trouble finding the correct SPARQL endpoint URL, as it seems to have changed at least twice in recent history, given the (outdated) documentation I found online (a common problem; no complaint!).
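For completeness, here is a sketch of doing the same from Java with Apache Jena; the endpoint URL below is the one that currently works for me and, as just noted, may well differ for your Blazegraph version:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;

    public class BlazegraphQuery {
        public static void main(String[] args) {
            String endpoint = "http://localhost:9999/blazegraph/sparql"; // assumed default
            String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
            try (QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, query)) {
                ResultSet results = qexec.execSelect();
                while (results.hasNext()) {
                    QuerySolution solution = results.next();
                    System.out.println(solution);
                }
            }
        }
    }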

HT to Andra, who first mentioned Blazegraph to me, and to the Blazegraph team.