Bioclipse 2.6.2 with recent hacks #1: Wikidata & Linked Data Fragments

Bioclipse dialog to upload chemical
structures to an OpenTox repository.
We chem- and bioinformaticians have it easy when it comes to Open Science. Sure, writing documentation, doing unit testing, etc., takes a lot of time, but testing some new idea is done easily. People have become used to that, so trying to explain that doing it properly (documentation, unit testing) actually takes a long time can be rather hard.

Important for this is a platform that lets you experiment easily. For many biologists this environment is R or Python. To me, with most of the libraries important to me written in Java, this is Groovy (e.g. see my Groovy Cheminformatics book) and Bioclipse (doi:10.1186/1471-2105-8-59). Sometimes these hacks grow into full papers, like what started as OpenTox support (doi:10.1186/1756-0500-4-487), which even paved (for me) the way to the eNanoMapper project!

But often these hacks are just for me personally, or at least initially. However, I have no excuse not to make them available to a wider audience too. Of course, the source code is easy: I normally have even the smallest Bioclipse hack available somewhere on GitHub (look for bioclipse.* repositories). But it is getting even better, now that Arvid Berg (Bioclipse team) gave me the pointers to ensure you can install those hacks, taking advantage of Uppsala's build system.

So, from now on, I will blog about how to install Bioclipse hacks I deem useful for a wider audience, starting with this post on the Wikidata/Linked Data Fragments hack I used to get more CAS registry number mappings to other identifiers.

Install Bioclipse 2.6.2
The first thing you need is Bioclipse 2.6.2. That's the beta release of Bioclipse, and it is required for my hacks. From this link you can download binary nightly builds for GNU/Linux, MS-Windows, and OS/X. For the first two, 32 and 64 bit builds are available. You may need to install Java; version 1.7 should do fine. Unpack the archive, and then start the Bioclipse executable. For example, on GNU/Linux:

  $ tar zxvf Bioclipse.2.6.2-beta.linux.gtk.x86.tar.gz
  $ cd Bioclipse.2.6.2-beta/
  $ ./bioclipse

Install the Linked Data Fragments manager
The default update site already has a lot of goodies you can play with. Just go to Install → New Feature.... That will give you a nice dialog like this one (which allows you to install the aforementioned Bioclipse-OpenTox feature):



But that update site doesn't normally have my quick hacks. This is where Arvid's pointers come in, which I hope to carefully reproduce here so that my readers can install other Bioclipse extensions too.

Step 1: enable the 'Advanced Mode'
The first step is to enable the 'Advanced Mode'; that is, unless you are advanced, forget about this. Fortunately, the fact that you haven't given up on reading my blog yet is a good indication that you are advanced. Go to the Window → Preferences menu and enable the 'Advanced Mode' in the dialog, as shown here:


When done, click Apply and close the dialog with OK.

Step 2: add an update site from the Uppsala build system
The first step adds a new menu option that enables you to add arbitrary new update sites, like those available from the Uppsala build system. To add a new update site, use this new menu option and select Install → Software from update site...:


By clicking the Add button, you get to this dialog, where you should enter the update site information:


This dialog will become a recurrent thing in this series, though the content may change from time to time. The information you need to enter is (the name is not too important and can be something else that makes sense to you):

  1. Name: Bioclipse RDF Update Site
  2. Location: http://pele.farmbio.uu.se/jenkins/job/Bioclipse.rdf/lastSuccessfulBuild/artifact/site.p2/

After clicking OK in the above dialog, you will return to the Available Software dialog (shown earlier).

Step 3: installing the Linked Data Fragments Feature
The Available Software dialog will now show a list of features available from the just-added update site:


You can see the Linked Data Fragments Feature is now listed; you can select it with the checkbox in front of the name (as shown above). The Next button will walk you through a few more pages of this dialog, providing information about dependencies and a page that requires you to accept the Open Source licenses involved. At the end of these steps, it may require you to restart Bioclipse.

Step 4: opening the JavaScript Console and verify the new extension is installed
Because the Linked Data Fragments Feature extends Bioclipse with a new, so-called manager (see doi:10.1186/1471-2105-10-397), we need to use the JavaScript Console (or the Groovy Console or Python Console, if you prefer those languages). Make sure the JavaScript Console is open, or open it via the menu Window → Show View → JavaScript Console, and type man ldf in the console view, which should result in something like this:


You can also type man ldf.createStore to get a brief description of the method I used to get a Linked Data Fragments wrapper for Wikidata in my previous post, which is what you should reread next.
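To check that everything works end to end, a minimal Groovy Console session could look like the sketch below. The endpoint URL is the one used later in this post; the trivial LIMIT 5 test query is just my own choice for a quick connectivity check:

    // create a Linked Data Fragments wrapper for Wikidata
    wikidataldf = ldf.createStore(
      "http://data.wikidataldf.com/wikidata"
    )

    // run a trivial test query against it
    sparql = """
    SELECT * WHERE { ?s ?p ?o } LIMIT 5
    """
    results = rdf.sparql(wikidataldf, sparql)
    println "Number of results: " + results.rowCount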

Have fun, and I am looking forward to hearing how you use Linked Data Fragments with Bioclipse!

Chemistry Central and the ORCID identifier

If you are a scientist, you will have heard about the ORCID identifier by now. If not, you have been focusing on groundbreaking research and have isolated yourself from the rest of the world, just to make it perfect and get that Nobel prize next year. If you have been working on impactful, Nobel prize-worthy research, and have been blogging and tweeting about your progress as a good Open Scholar, you know ORCID is the DOI for "research contributors", you already have one yourself, and probably also that T-shirt with your own identifier. Mine is 0000-0001-7542-0286, and almost 1.3M other authors got one too. The list of ORCIDs on Wikipedia is growing (and on Wikidata), thanks to Andy Mabbett, who also made it possible to add your ORCID on WikiPathways.

Anyway, what I was pleased to see today is that you can now log in with your ORCID identifier on the Chemistry Central article submission system (notice the green icon):


Many other publishers allow logging in with your ORCID too, which benefits many:

  1. authors who just enter a list of ORCID identifiers, instead of a long list of author names and affiliations
  2. publishers, which have a simpler submission system and get more accurate information about submitters
  3. funding agencies which can more easily track what is done with the research funding
  4. research institutes which can more easily track what their employees are studying
Don't have one yet? Get your very own ORCID here.

CC-BY with the ACS Author Choice: CDK and Blue Obelisk papers liberated

Screenshot of an old CDK-based
JChemPaint, from the first CDK paper.
CC-BY :)
Already a while ago, the American Chemical Society (ACS) decided to allow the Creative Commons Attribution license (version 4.0) to be used on their papers, via their Author Choice program. ACS members pay $1500, which is low for a traditional publisher. While I would rather see them move to a gold Open Access journal, it is a very welcome option! For the ACS business model it means a guaranteed sale of some 40 copies of this paper (at about $35 each), because it will not immediately affect the sale of the full journal (much). Some papers might have sold more than that had they remained closed access, but for many papers this sounds like a smart move money-wise. Of course, they also buy themselves some goodwill, and green Open Access is just around the corner anyway.

Better, perhaps, is that you can also use this option to make a past paper Open Access under a CC-BY license! And that is exactly what Christoph Steinbeck did with five of his papers, including two on which I am a co-author. And these are not the least papers either. The first is the first CDK paper from 2003 (doi:10.1021/ci025584y), which featured the screenshot of JChemPaint shown above. Note that in those days the print journal was still the target, so the screenshot is in gray scale :) BTW, given that this paper is cited 329 times (according to ImpactStory), maybe the ACS could have sold more than 40 copies. But for me, it means that finally people can read this paper about Open Science in chemistry, even after so many years. BTW, there is little chance the second CDK paper will be freed in a similar way.

The second paper that was liberated this way is the first Blue Obelisk paper (doi:10.1021/ci050400b), which has been cited 276 times (see ImpactStory):


This screenshot nicely shows how readers can see the CC-BY license for this paper. Note that it also lists that the copyright is with the ACS, which is correct, because in those days you commonly gave away your copyright to the publisher (I have stopped doing this, bar some unfortunate recent exceptions).

So, head over to your email client, email support@services.acs.org, and let them know you also want your JCICS/JCIM paper available under a CC-BY license! No excuse anymore for not making your seminal work in cheminformatics available as gold Open Access!

Of course, submitting your new work to the Journal of Cheminformatics is cheaper and has the advantage that all papers are Open Access!

Ambit.js 0.0.2 release: a scatterplot

Two weeks ago I made a second release of ambit.js, a small project to show the power of the eNanoMapper API (doi:10.5281/zenodo.16517). It is still a client library for using the API from JavaScript, and a few weeks ago I posted a few screenshots. This post announces the 0.0.2 release, which doesn't change a lot since the previous version, but now features online documentation:


It also has a number of online examples, which include the code behind the screenshots in the earlier post, but also a new scatterplot example (still using d3.js):


Now, this scatter plot basically shows that the data we are looking at does not show a particular correlation, but that was not really the hypothesis here anyway. However, it does show how hypotheses can be tested with the API and scatter plots.

Getting CAS registry numbers out of WikiData

doi:10.15200/winn.142867.72538

I have promised my Twitter followers the SPARQL query you have all been waiting for. Sadly, you had to wait for it for more than two months. I'm sorry about that. But, here it is:
    PREFIX wd: <http://www.wikidata.org/entity/>

    SELECT ?compound ?id WHERE {
      ?compound wd:P231s [ wd:P231v ?id ] .
    }
What this query does is ask for all things that have a CAS registry identifier (let's call whatever is behind the identifier a "compound"; of course, it can be mixtures, ill-defined chemicals, nanomaterials, etc.). This query results in a nice table of Wikidata identifiers (e.g. Q47512 is acetic acid) and matching CAS numbers, 16298 of them.

Because Wikidata is not specific to the English Wikipedia, CAS numbers from other origins will show up too. For example, the CAS number for N-benzylacrylamide (Q10334928) is provided by the Portuguese Wikipedia:


I used Peter Ertl's cheminfo.org (doi:10.1186/s13321-015-0061-y) to confirm this compound indeed does not have an English page, which is somewhat surprising.

The SPARQL query uses a predicate specifically for the CAS registry number (P231). Other identifiers have similar predicates, like PubChem compound (P662) and ChemSpider (P661). That means Wikidata can become a community-crowdsourced resource of identifier mappings, which is one of the things Daniel Mietchen, myself, and a few others proposed in this H2020 grant application (doi:10.5281/zenodo.13906). The SPARQL query is run via the Linked Data Fragments platform, which you should really check out too, using the Bioclipse manager I wrote around it.

The full Bioclipse script looks like:
    wikidataldf = ldf.createStore(
      "http://data.wikidataldf.com/wikidata"
    )

    // P231 CAS
    identifier = "P231"
    type = "cas"

    sparql = """
    PREFIX wd: <http://www.wikidata.org/entity/>

    SELECT ?compound ?id WHERE {
      ?compound wd:${identifier}s [ wd:${identifier}v ?id ] .
    }
    """
    mappings = rdf.sparql(wikidataldf, sparql)

    // recreate an empty output file
    outFilename = "/Wikidata/${type}2wikidata.csv"
    if (ui.fileExists(outFilename)) {
      ui.remove(outFilename)
    }
    ui.newFile(outFilename)

    // save to a file
    for (i=1; i<=mappings.rowCount; i++) {
      wdID = mappings.get(i, "compound").substring(3)
      ui.append(
        outFilename,
        wdID + "," + mappings.get(i, "id") + "\n"
      )
    }
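Because the script is parameterized, adapting it for the other identifier predicates mentioned above only takes changing two lines. For example, for PubChem compound identifiers (a sketch along the same lines; the type variable just names the output file):

    // P662 PubChem compound
    identifier = "P662"
    type = "pubchem"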
BTW, of course, all this depends on work by many others including the core RDF generation with the Wikidata Toolkit. See also the paper by Erxleben et al. (PDF).


Erxleben, F., Günther, M., Krötzsch, M., Mendez, J., Vrandečić, D., 2014. Introducing wikidata to the linked data web. In: Mika, P., Tudorache, T., Bernstein, A., Welty, C., Knoblock, C., Vrandečić, D., Groth, P., Noy, N., Janowicz, K., Goble, C. (Eds.), The Semantic Web – ISWC 2014. Vol. 8796 of Lecture Notes in Computer Science. Springer International Publishing, pp. 50-65. URL http://dx.doi.org/10.1007/978-3-319-11964-9_4

Mietchen, D., Others, M., Anonymous, Hagedorn, G., Jan. 2015. Enabling open science: Wikidata for research. URL http://dx.doi.org/10.5281/zenodo.13906

Ertl, P., Patiny, L., Sander, T., Rufener, C., Zasso, M., Mar. 2015. Wikipedia chemical structure explorer: substructure and similarity searching of molecules from Wikipedia. Journal of Cheminformatics 7 (1), 10+. URL http://dx.doi.org/10.1186/s13321-015-0061-y

"What You're Doing Is Rather Desperate"

Self-portrait by Gustave Courbet.
Source: Wikimedia Commons.
One blog that I have been (occasionally but repeatedly) reading for a long time is the What You're Doing Is Rather Desperate blog by Neil Saunders. HT to WoW!ter for pointing me to this nice post where Saunders shows how to calculate the number of entries in PubMed marked as retracted (in two separate ways). The spoiler (you should check his R scripts anyway!): about 3900 retractions.
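Saunders uses R, but the same count can be reproduced from any environment that can make a web call. Below is a minimal Groovy sketch against the NCBI E-utilities, assuming that counting entries with the 'Retracted Publication' publication type is the figure we are after; the endpoint and XML layout follow the E-utilities documentation:

    // count PubMed entries tagged with the "Retracted Publication" type,
    // using the NCBI E-utilities esearch service
    term = URLEncoder.encode(
      '"retracted publication"[Publication Type]', "UTF-8"
    )
    url = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/" +
          "esearch.fcgi?db=pubmed&term=" + term
    xml = new XmlSlurper().parseText(new URL(url).text)
    println "Retracted entries in PubMed: " + xml.Count.text()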

I have been quite interested in this idea of retractions, and in recent discussions with Chris (Christopher; mostly offline this time, and even some over a beer :) about whether retractions are good or bad (BTW, check his inaugural speech on YouTube). He points out that retractions are not always for the right reasons and, probably worse, have unintended consequences. An example he gave is a senior researcher with two positions: in one lab someone misbehaved, and that lab did not find any misconduct by this senior researcher; however, his other affiliation did not agree and fired him.

Five years ago I would have said that any senior researcher on a paper should still understand things in detail, and that if misconduct is found, the senior authors are to blame too. I still believe this is the case (that's what your co-researchers are for), but feeling the pressure to publish enough and just not having enough time, I do realize I cannot personally reproduce all results my post-docs and collaborators produce. But we all know that publishing has taken a wrong turn and, yes, I am trying to make it return to a better default.

Why I am interested in retractions
But that is not why I wanted to blog and discuss Saunders' post. Instead, I want to explain why I am interested in retractions and in another blog you should check out, Retraction Watch. In fact, I am very much looking forward to their database! However, this is not because of the blame game. Not at all.

Instead, I am interested in noise in knowledge. Obviously, because this greatly affects my chemblaics research. Particularly, I like to reduce noise, or at the very least take appropriate measures when doing statistics, as we have plenty of means to deal with noise (like cancelling it out). Now, are retractions then an appropriate means to find incorrect, incomplete, or just outright false knowledge? No. But there is nothing better.

There are better approaches: I have long advocated, and still advocate, the Citation Typing Ontology (CiTO), though I have to admit I am not up to date with David Shotton's work. CiTO allows you to annotate whether one paper disagrees with another, or agrees with it and perhaps even uses its knowledge. It can also annotate a citation as merely being included because the cited paper has some authority (expect many of those for Nature and Science papers).

But we have a long way to go before using CiTO becomes a reality. If interested, please check out the CiteULike support and Shotton's OpenCitations.

What does this mean for databases?
Research depends on data: some you measure, some you get from the literature and, increasingly, from databases. The latter, for example, to compare your own results with other findings. It is indeed helpful when databases provide these two functions:
  1. provide means to find similar experiments
  2. provide a gold standard of true knowledge
These databases require two different approaches: the first presents the data as reported, or even better as raw data (as it came from the machine, unprocessed, though increasingly the machines already do processing to the best of their knowledge); the second distills the true facts, possibly normalizing data along the way, e.g. by correcting obvious typing and drawing errors.

Indeed, databases can combine these features, just like PubChem and ChemSpider do for small compounds. PubChem has the explicit approach of providing both the raw input from sources (the substances, with SIDs) and the normalized result (the compounds, with CIDs).
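To see this duality for yourself, you can ask PubChem's PUG REST service which substance records (SIDs) were normalized into a given compound (CID). A minimal Groovy sketch, assuming the compound/cid/<cid>/sids URL pattern from the PUG REST documentation; CID 2244 (aspirin) is just an example:

    // list the substance records (SIDs) behind one normalized compound (CID)
    cid = 2244   // aspirin
    url = "http://pubchem.ncbi.nlm.nih.gov/rest/pug/" +
          "compound/cid/" + cid + "/sids/TXT"
    sids = new URL(url).text.readLines()
    println "Number of SIDs merged into CID " + cid + ": " + sids.size()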


But what if the original paper turns out to be wrong? There are (at least) two phases:
  1. can we detect when a paper turns out wrong?
  2. can we propagate this knowledge into databases?
The first clearly reflects my interest in CiTO and retractions. We must develop means to filter out all the reported facts that turn out to be incorrect. And can we efficiently keep our thousands of databases clean (many valid approaches!)? Do retractions matter here? Yes, because research in so-called higher-impact journals is also seeing more retractions (whatever the reason for that correlation is); see this post by Bjoern Brembs.


Where we must be heading
What the community needs to develop in the next few years are approaches for propagating information about which knowledge is correct and which is incorrect. That is, high-impact knowledge must enter databases quickly, e.g. the exercise myokine irisin, but also the fact that it was recently shown that it very likely doesn't exist, or at least that the original paper most likely measured something else (doi:10.1038/srep08889). Now, this is clearly a high-profile "retraction" of facts that few of us will have missed. Right? And that's where the problem is: the tail of disproven knowledge is very long, and we cannot rely on such facts propagating quickly if we do not use tools to help us. This is one reason why my attention turned to semantic technologies, so that contradictions can be found more easily.

But I am getting rather desperate about all the badly annotated knowledge in databases, and I am also getting desperate about being able to make a change. The research I'm doing may turn out rather desperate.

Ambit.js: JavaScript client library for the eNanoMapper API technical preview

eNanoMapper is past its first year, and an interesting year it has been! I have very much enjoyed the collaboration with all partners, and also the project's Open character. Just check our GitHub repository or our Jenkins build server.

Just this month, the NanoWiki data on Figshare was released, and I just got around to uploading ambit.js to GitHub. This library is still in development and should also be considered a technical preview. This JavaScript client library, inspired by Ian Dunlop's ops.js for Open PHACTS, allows visualization of data in the repository in arbitrary web pages, using jQuery and d3.js.

The visualization on the right shows the distribution of nanomaterial types in the preview server at http://data.enanomapper.net/ (based on AMBIT and the OpenTox API), which contains various data sets (others by IDEA and NTUA), including the above-mentioned NanoWiki knowledge base that I started in Prof. Fadeel's group at Karolinska Institutet. This set makes up about three quarters of all data: effectively everything except the orange 'nanoparticle' slice and the blue section due north. You can see I collected mostly data for metal oxides and some carbon nanotubes (though I did not digitize a lot of biological data for those).

But pie charts only work on pi day, so let's quickly look at another application: summarizing the size distribution:


Or what about the zeta potentials? Yes, the suggestion to make it a scatter plot against the pH at which the potential was measured is already under consideration :)


Do you want to learn more? Get in contact!

Google Code closing down: semanticchemistry and chemojava

Google Code is closing down. I had a few projects running there and participated in semanticchemistry, which hosts the CHEMINF ontology (doi:10.1371/journal.pone.0025513). There is also ChemoJava, an extension of the CDK with GPL-licensed bits.

Fortunately, they have an exporter which automates the migration of a project to GitHub, and this is now in progress. Because the exporter is seeing a heavy load, they warn about export times of up to twelve hours! The "0% complete" is promising, however :)

For the semanticchemistry project, I have asked the other people involved where we want to host it, as GitHub is just one of the options. A copy does not hurt anyway, but the one I am currently making may very well not be the new project location.

PubMed Commons
When you migrate your own projects and you published your work referring to this Google Code project page, please also leave a comment on PubMed Commons pointing readers to the new project page!

Migrating to CDK 1.5: SSSR and two other ring sets

I arrived a bit earlier at the year 1 eNanoMapper meeting so that I could sit down with Nina Jeliazkova to work on migrating AMBIT2 to CDK 1.5. One of the big new things is the detection of ring systems, which John gave a major overhaul. Part of that is that the SSSRFinder is now retired to the legacy module. I haven't even gone into all the new goodies of this myself, but one of the things we needed to look at is how to replace the use of SSSRFinder for finding the SSSR set. So, I checked whether I understood the replacement code by taking the SSSRFinderTest and replacing the code in those tests with the new code, and all seemed to work as I expected.

So, here are a few examples of code snippets you need to replace. Previously, your code would have something like:

IRingSet ringSet = new SSSRFinder(someAtomContainer).findSSSR();

This code is replaced with:

IRingSet ringSet = Cycles.sssr(someAtomContainer).toRingSet();

Similarly, you could have had something like:

IRingSet ringSetEssential =
  new SSSRFinder(buckyball).findEssentialRings();
IRingSet ringSetRelevant =
  new SSSRFinder(buckyball).findRelevantRings();

This is changed to:

IRingSet ringSetEssential = Cycles.essential(buckyball).toRingSet();
IRingSet ringSetRelevant = Cycles.relevant(buckyball).toRingSet();
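For completeness, here is a small self-contained Groovy sketch, with the imports the snippets above need, that parses a SMILES and counts the SSSR rings with the new API; naphthalene as the test molecule is my own choice:

// imports needed for the new API (and for parsing a test structure)
import org.openscience.cdk.graph.Cycles
import org.openscience.cdk.interfaces.IRingSet
import org.openscience.cdk.silent.SilentChemObjectBuilder
import org.openscience.cdk.smiles.SmilesParser

// parse a SMILES: naphthalene has two rings in its SSSR
parser = new SmilesParser(SilentChemObjectBuilder.getInstance())
mol = parser.parseSmiles("c1ccc2ccccc2c1")

// Cycles.sssr() replaces new SSSRFinder(mol).findSSSR()
IRingSet rings = Cycles.sssr(mol).toRingSet()
println "SSSR ring count: " + rings.getAtomContainerCount()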

Programming in the Life Sciences #21: 2014 Screenshots #1

December saw the end of this year's PRA3006 course (aka #mcspils). Time to blog some screenshots of the student projects. Like last year, the aim was to use the Open PHACTS API to collect data with ops.js, which should then be visualized in an HTML page, preferably with d3.js. This year, all projects reached that goal.

ACE inhibitors
The first team (Mischa-Alexander and Hamza) focused on ACE inhibitors (type:"drug class") and pathway WP554 from WikiPathways. They use a tree structure to list inhibitors along with their activity:


The source code for this project is available under a MIT license.

Diabetes
The second team (Catherine and Moritz) looked at compounds hitting diabetes mellitus targets. They take advantage of the new disease API methods: they first ask for all targets for the disease, and then query for all compounds. Mind you, the compounds are not filtered by activity, so not every interaction shown reflects a real target.


This project too is available under the MIT license.

Tuberculosis
The third project (Nadia and Loic) also goes from disease to targets; they looked at tuberculosis.

Asynchronous calls
If you know ops.js, d3.js, and JavaScript a bit, you know that these projects are not trivial. The remote web service calls are made in an asynchronous manner: each call comes with a callback function that gets called when the server returns an answer, at some future point in time. Therefore, if you want to visualize, for example, compounds with activities against targets for a particular disease, you need two web service calls, with the second made in the callback function of the first. Now, try to globally collect the data from that with JavaScript and HTML, and make sure to trigger the visualization only when all information is collected! But even without that, the students need to convert the returned web service answer into a format that d3.js can handle. In short: quite a challenge for six practical days!