Small molecule penalized logP optimization

Penalized LogP Optimization

This notebook shows how to optimize a generative model with respect to the Penalized LogP score, a standard benchmark in many generative design papers.

Performance Notes

The workflow in this notebook is more CPU-constrained than GPU-constrained because generated samples must be evaluated on CPU. If you have a multi-core machine, it is recommended that you uncomment and run the set_global_pool cells in the notebook (an example appears right after the imports below). This enables multiprocessing, which typically gives a 2-4x speedup.

This notebook may run slowly on Colab due to CPU limitations.

If running on Colab, remember to change the runtime type to GPU.

import sys
sys.path.append('..')

from mrl.imports import *
from mrl.core import *
from mrl.chem import *
from mrl.templates.all import *

from mrl.torch_imports import *
from mrl.torch_core import *
from mrl.layers import *
from mrl.dataloaders import *
from mrl.g_models.all import *
from mrl.vocab import *
from mrl.policy_gradient import *
from mrl.train.all import *
from mrl.model_zoo import *
from mrl.combichem import *
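
If you have multiple CPU cores, the multiprocessing pool can be set up right after the imports. A minimal sketch, assuming set_global_pool and os are exposed by the wildcard imports above (the same call appears uncommented in the Train section):

# Optional: enable multiprocessing for the CPU-bound chemistry evaluations.
# Uncomment on a multi-core machine for a 2-4x speedup.
# set_global_pool(min(10, os.cpu_count()))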

Agent

Here we create the model we want to optimize. We will use LSTM_LM_Small_ZINC_NC, an LSTM-based language model pretrained on part of the ZINC database without chirality. Training without chirality prevents a form of mode collapse where the model converges on generating different isomers of the same compound.

Whether to use a model trained with or without chirality depends on the reward function you are trying to optimize. Use a model with chirality if your reward function handles chirality in a meaningful way. Specifically, your reward function should give different scores to different isomers, and that difference should reflect a real aspect of the predicted property (i.e. the affinity of different isomers) rather than a spurious feature learned by the model (which happens surprisingly often).

The Penalized LogP score isn't impacted by chirality, so using a non-chiral model makes sense. To be technical, Penalized LogP includes an SA score component that is influenced by the number of stereocenters in a molecule, but this does not result in different isomers getting different scores.
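
For reference, Penalized LogP as used in most benchmarks is the LogP of a molecule penalized by its synthetic accessibility (SA) score and by rings larger than six atoms. The sketch below shows this conventional formulation using RDKit (the sascorer module ships in RDKit's contrib directory); it is an illustration of the score being optimized, not MRL's internal implementation:

import os, sys
from rdkit import Chem, RDConfig
from rdkit.Chem import Descriptors

# the SA scorer ships with RDKit's contrib scripts
sys.path.append(os.path.join(RDConfig.RDContribDir, 'SA_Score'))
import sascorer

def penalized_logp(smile):
    "Conventional Penalized LogP: LogP - SA score - large-ring penalty"
    mol = Chem.MolFromSmiles(smile)
    if mol is None:
        return None
    logp = Descriptors.MolLogP(mol)
    sa = sascorer.calculateScore(mol)
    # penalty for rings with more than 6 atoms
    ring_sizes = [len(ring) for ring in mol.GetRingInfo().AtomRings()]
    ring_penalty = max(max(ring_sizes) - 6, 0) if ring_sizes else 0
    return logp - sa - ring_penalty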

agent = LSTM_LM_Small_ZINC_NC(drop_scale=0.3, opt_kwargs={'lr':5e-5})

Template

Here we create our template.

We set the following hard filters:

  • ValidityFilter: checks that a sample is a valid chemical structure
  • SingleCompoundFilter: checks that a sample is a single compound rather than multiple disconnected fragments

We set the following soft filters:

  • PenalizedLogPFilter: evaluates the Penalized LogP score of a compound. By passing score=PassThroughScore(), this filter simply returns the raw Penalized LogP score (a schematic of how hard and soft filters combine follows the template code below)
template = Template([ValidityFilter(), 
                     SingleCompoundFilter(), 
                     ],
                    [PenalizedLogPFilter(None, None, score=PassThroughScore())], 
                    fail_score=-1., log=False)

template_cb = TemplateCallback(template, prefilter=True)
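
Conceptually, the template combines the two filter types roughly as follows: hard filters act as pass/fail gates, and any failure returns fail_score, while soft filters contribute their scores to compounds that pass. The function below is a schematic sketch of that behavior, not the MRL implementation:

def template_score_sketch(sample, hard_filters, soft_filters, fail_score=-1.):
    "Schematic of how hard and soft filters combine into one score"
    if not all(f(sample) for f in hard_filters):
        return fail_score                        # any hard-filter failure -> fixed penalty
    return sum(f(sample) for f in soft_filters)  # here: just the Penalized LogP score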

Reward

We are only optimizing towards Penalized LogP, which is computed by our template. For this reason, we don't add any additional score terms.

Loss Function

We will use the PPO (Proximal Policy Optimization) policy gradient algorithm (a schematic of its clipped objective follows the code below).

pg = PPO(0.99,
        0.5,
        lam=0.95,
        v_coef=0.5,
        cliprange=0.3,
        v_cliprange=0.3,
        ent_coef=0.01,
        kl_target=0.03,
        kl_horizon=3000,
        scale_rewards=True)

loss = PolicyLoss(pg, 'PPO', 
                   value_head=ValueHead(256), 
                   v_update_iter=2, 
                   vopt_kwargs={'lr':1e-3})
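
For reference, the clipped surrogate objective at the core of PPO can be written in a few lines of PyTorch. This is a generic textbook sketch of the policy term, not the internals of MRL's PPO class (which also handles the value head, entropy bonus, and adaptive KL penalty configured above):

import torch

def ppo_policy_loss(logp_new, logp_old, advantages, cliprange=0.3):
    "Generic PPO clipped surrogate loss for a batch of sampled actions"
    ratio = torch.exp(logp_new - logp_old)                         # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - cliprange, 1 + cliprange) * advantages
    return -torch.min(unclipped, clipped).mean()                   # maximize surrogate -> minimize negative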

Samplers

We create the following samplers:

  • sampler1 ModelSampler: this samples from the main model
  • sampler2 ModelSampler: this samples from the baseline model
  • sampler3 LogSampler: this samples high scoring samples from the log
  • sampler4 CombichemSampler: this sampler runs combichem generation on the top-scoring compounds. Combining generative models with combichem greatly accelerates finding high-scoring compounds (a schematic of one combichem round follows the sampler setup below)
gen_bs = 1500

sampler1 = ModelSampler(agent.vocab, agent.model, 'live', 400, 0., gen_bs)
sampler2 = ModelSampler(agent.vocab, agent.base_model, 'base', 400, 0., gen_bs)
sampler3 = LogSampler('samples', 'rewards', 10, 98, 100)

mutators = [
    ChangeAtom(['6', '7', '8', '9']),
    AppendAtomSingle(['C', 'N', 'O', 'F', 'Cl', 'Br']),
    AppendAtomsDouble(['C', 'N', 'O']),
    AppendAtomsTriple(),
    DeleteAtom(),
    ChangeBond(),
    InsertAtomSingle(['C', 'N', 'O']),
    InsertAtomDouble(['C', 'N']),
    InsertAtomTriple(),
    AddRing(),
    ShuffleNitrogen(10)
]

mc = MutatorCollection(mutators)

crossovers = [FragmentCrossover()]

cbc = CombiChem(mc, crossovers, template=template, rewards=[],
                prune_percentile=70, max_library_size=400, log=True, p_explore=0.2)

sampler4 = CombichemSampler(cbc, 20, 98, 0.2, 1, 'rewards', 'combichem')

samplers = [sampler1, sampler2, sampler3, sampler4]
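
A single combichem round, as configured above, conceptually works like this: take the current top-scoring compounds, expand them with the mutators and crossovers, score the enlarged library, and prune back to the best-scoring slice before the next round. The function below is a schematic sketch with hypothetical helper signatures, not the CombiChem internals:

def combichem_round_sketch(library, mutators, crossovers, score_fn, prune_percentile=70):
    "Schematic mutate/crossover/score/prune cycle (hypothetical helper signatures)"
    candidates = set(library)
    for smile in library:
        for mutate in mutators:
            candidates.update(mutate(smile))       # point mutations of each compound
    for crossover in crossovers:
        candidates.update(crossover(library))      # recombine fragments across compounds
    scored = sorted(candidates, key=score_fn, reverse=True)
    n_keep = max(1, int(len(scored) * (100 - prune_percentile) / 100))
    return scored[:n_keep]                         # keep only the top-scoring slice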

Other Callbacks

We add the following callbacks:

  • supervised_cb: every 20 batches, this callback grabs the top 2% of samples from the log and runs supervised training with these samples
  • live_max: prints the maximum score from sampler1 each batch
  • live_p90: prints the 90th percentile score from sampler1 each batch
supervised_cb = SupervisedCB(agent, 20, 0.5, 98, 5e-4, 64, epochs=5)
live_max = MaxCallback('rewards', 'live')
live_p90 = PercentileCallback('rewards', 'live', 90)

cbs = [supervised_cb, live_p90, live_max]

Environment

We create our environment with the objects assembled so far.

env = Environment(agent, template_cb, samplers=samplers, rewards=[], losses=[loss],
                 cbs=cbs)

Train

set_global_pool(min(10, os.cpu_count()))
env.fit(128, 150, 400, 25)
iterations rewards rewards_final new diversity bs template valid PPO rewards_live_mean rewards_live_p90 rewards_live_max
0 -0.155 -0.155 1.000 1.000 128 -0.155 1.000 0.568 -0.045 1.719 3.069
25 2.360 2.360 0.969 1.000 128 2.360 1.000 2.917 -0.422 1.189 1.855
50 3.822 3.822 0.906 1.000 128 3.822 1.000 4.236 -0.075 1.593 3.046
75 4.450 4.450 0.945 1.000 128 4.450 1.000 6.482 0.182 2.067 2.213
100 2.236 2.236 0.891 1.000 128 2.236 1.000 3.320 0.386 2.171 3.378
125 4.694 4.694 0.930 1.000 128 4.694 1.000 3.907 1.276 3.800 5.314
150 4.725 4.725 0.891 1.000 128 4.725 1.000 4.318 0.880 3.107 4.361
175 5.200 5.200 0.844 1.000 128 5.200 1.000 4.621 0.825 3.000 7.494
200 6.940 6.940 0.922 1.000 128 6.940 1.000 4.232 1.547 4.185 5.956
225 6.633 6.633 0.914 1.000 128 6.633 1.000 4.069 2.259 4.571 5.551
250 7.861 7.861 0.922 1.000 128 7.861 1.000 3.612 2.398 5.208 9.807
275 8.819 8.819 0.859 1.000 128 8.819 1.000 3.965 2.082 5.753 9.320
300 10.725 10.725 0.930 1.000 128 10.725 1.000 4.172 2.041 5.105 6.869
325 14.188 14.188 0.891 1.000 128 14.188 1.000 4.515 3.963 7.501 11.520
350 10.278 10.278 0.898 1.000 128 10.278 1.000 3.869 3.321 5.834 14.184
375 11.245 11.245 0.852 1.000 128 11.245 1.000 3.467 3.736 7.682 12.305
env.log.plot_metrics()
subset = env.log.df[env.log.df.rewards>23.6]
draw_mols(to_mols(subset.samples.values), legends=[f"{i:.5f}" for i in subset.rewards.values])

Note that Penalized LogP is strongly influenced by the size of the generated molecule, which makes it a weak benchmark; nonetheless, it is very common in the literature. Increasing the maximum sequence length from 150 (what we used in Environment.fit above) to something like 200 will result in higher Penalized LogP scores for the same amount of training.
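
For example, keeping everything else fixed and raising the second argument of Environment.fit (the maximum sequence length used during sampling, per the call above) from 150 to 200:

env.fit(128, 200, 400, 25)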