Benchmark Set

We compiled a set of twelve proteins with structural and experimental affinity data for the assessment of computational design methods for protein-ligand binding. For this, we systematically searched the PDBbind database [34], which lists high-quality crystal structures of protein-ligand complexes together with experimentally determined binding data. Each protein in our set has at least two mutational variants (usually the wild type and one or more mutants) accompanied by an affinity measure (the inhibitory constant Ki or the dissociation constant Kd) for the same ligand. The positions of amino acids that differ between the variants are always located in the binding pocket or active site. For each protein there is at least one crystal structure of a variant with the ligand; for ten of the twelve there are two or more crystal structures, which allow us to compare a design model of a variant with the respective crystal structure. The proteins and ligands in our benchmark set are very diverse. All ligands are shown in Figure 2. Each protein in the set belongs to a different fold as defined by SCOP [35], underscoring their structural diversity. This diversity allows us to test design methods on a wide range of problems and avoids bias. Table 1 lists the benchmark proteins and their associated data.

favorably with the ligand. The observed differences in ligand pose RMSD are not statistically significant (Figure 3). To assess whether the methods can correctly differentiate between protein variants that have a large affinity difference, we looked at pairs with an affinity difference of at least 50-fold. This cutoff translates to roughly 2.3 kcal/mol and was chosen to ensure that only pairs with clear, trustworthy affinity differences well outside experimental error are investigated. Table 2 lists the number of pairs in which the order of the mutants according to energy score is the same as the order according to affinity, meaning the design method would produce the correct ranking. Here, POCKETOPTIMIZER performs in the same range as ROSETTA, with 69 correctly predicted pairs as opposed to 64. When comparing the two receptor-ligand score functions used in our approach, AutoDock Vina appears to have some advantage over the CADDSuite score. The total scores of the different methods are also listed; based on these scores POCKETOPTIMIZER performs even better, with 71 and 76 correctly predicted pairs. However, since we are looking at affinity prediction, the binding score appears to be more appropriate for the comparison. We further examined how well the energy scores correlate with the affinities. For this we plotted the predicted energy of each design against the logarithmic affinities for all seven test cases with more than two mutations (Figure 4). The scores should correspond to the binding free energy, which in turn is proportional to the logarithm of the binding affinity. Here, all mutants with experimental affinity values of a test case are included, regardless of the extent of the affinity difference. Overall we find that the energy values follow the affinity logarithm only in some cases.
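To make the ranking test behind Table 2 concrete, the sketch below shows one way the 50-fold cutoff and the pairwise comparison could be implemented. It is an illustrative reconstruction, not code from POCKETOPTIMIZER or ROSETTA: the function names, the assumed temperature of 298 K, and the (Kd, score) input format are our own choices. The cutoff follows from ΔΔG = RT ln(ratio), which gives roughly 2.3 kcal/mol for a 50-fold affinity ratio; a pair is counted as correct when the variant with the lower (tighter) Ki/Kd also receives the lower (more favorable) binding score.

```python
# Illustrative sketch only; not part of POCKETOPTIMIZER or ROSETTA.
from itertools import combinations
from math import log

R_KCAL = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0           # assumed temperature in K

def ddg_from_affinity_ratio(ratio, temperature=T):
    """Free-energy difference corresponding to an affinity ratio.
    A 50-fold difference in Ki/Kd gives RT*ln(50), about 2.3 kcal/mol at 298 K."""
    return R_KCAL * temperature * log(ratio)

def correctly_ranked_pairs(variants, min_ratio=50.0):
    """Count variant pairs whose predicted ranking matches the measured one.

    `variants` is a list of (kd, score) tuples, where `kd` is the measured
    Ki or Kd (lower = tighter binding) and `score` is the predicted binding
    energy (lower = more favorable). Only pairs whose affinities differ by
    at least `min_ratio` are evaluated, mirroring the 50-fold cutoff."""
    correct, considered = 0, 0
    for (kd_a, score_a), (kd_b, score_b) in combinations(variants, 2):
        ratio = max(kd_a, kd_b) / min(kd_a, kd_b)
        if ratio < min_ratio:
            continue  # affinity difference too small to be trustworthy
        considered += 1
        # the tighter binder (smaller Kd) should get the lower (better) score
        if (kd_a < kd_b) == (score_a < score_b):
            correct += 1
    return correct, considered

print(f"50-fold cutoff ~ {ddg_from_affinity_ratio(50):.1f} kcal/mol")
```

Applied to every qualifying variant pair of the benchmark proteins, a count of this kind corresponds to the numbers of correctly predicted pairs reported in Table 2.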
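The correlation analysis behind Figure 4 can be sketched in the same spirit. Since the binding free energy scales with the logarithm of the affinity (ΔG° = RT ln Kd), a score that reflects this free energy should vary linearly with log Kd. The helper below is again hypothetical, reusing the (Kd, score) tuples from the previous sketch; it computes the Pearson correlation between predicted scores and log10 affinities for the variants of one test case, where a strongly positive coefficient means the scores follow the affinity logarithm.

```python
# Illustrative sketch only; data layout and function names are assumptions.
from math import log10, sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def score_vs_log_affinity(variants):
    """Correlate predicted binding scores with log10(Ki or Kd).

    `variants` is a list of (kd, score) tuples for one test case, with
    lower Kd meaning tighter binding and lower score meaning a more
    favorable prediction, so a positive correlation indicates that the
    scores track the affinity logarithm."""
    log_kd = [log10(kd) for kd, _ in variants]
    scores = [score for _, score in variants]
    return pearson(log_kd, scores)
```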
Discussion of Benchmark Results

When looking at a pair of protein variants, POCKETOPTIMIZER is able to correctly predict which variant has the better binding affinity if that difference is based on the introduction or abolition of a direct interaction of the mutable residue's side chain.