To address these problems, we developed AFproject, a publicly available web-based service for comprehensive and unbiased evaluation of AF tools. The service is based on eight well-established and widely used reference sequence data sets as well as four new data sets. It can be used to comprehensively evaluate AF methods under five different sequence analysis scenarios: protein sequence classification, gene tree inference, regulatory sequence identification, genome-based phylogenetics, and horizontal gene transfer (HGT) detection (Table 2). To evaluate the existing AF methods with these data sets, we asked the developers of 24 AF tools to run their software on our data sets or to recommend input parameter values appropriate for each data set. In total, our study involved 10,202 program runs, resulting in 1,020,493,359 pairwise sequence comparisons (Table 1; Additional file 1: Table S1). All benchmarking results are stored on the AFproject website, where they can be downloaded, reproduced, and inspected. Thus, any future evaluation results can be seamlessly compared to the existing ones obtained using the same reference data sets with precisely defined software parameters. By providing a way to automatically include new methods and to disseminate their results publicly, we aim to maintain an up-to-date and comprehensive assessment of state-of-the-art AF tools, allowing contributions and continuous updates by all developers of AF-based methods.
To automate AF method benchmarking with a wide range of reference data sets, we developed a publicly available web-based evaluation framework (Fig. 1). Using this workflow, an AF method developer who wants to evaluate their own algorithm first downloads sequence data sets from one or more of the five categories (e.g., the data set of protein sequences with low identity from the protein sequence classification category) from the server. The developer then uses the downloaded data set to calculate pairwise AF distances or dissimilarity scores between the sequences it contains. The benchmarking service accepts the resulting pairwise distances in tab-separated value (TSV) format or as a matrix of pairwise distances in standard PHYLIP format. In addition, benchmarking procedures in two categories (genome-based phylogeny and horizontal gene transfer) also support trees in Newick format to allow for further comparative analysis of tree topologies.
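To illustrate the submission format described above, the following sketch computes a simple alignment-free dissimilarity (Jaccard distance over k-mer sets, one of many possible word-counting measures; not the method of any specific benchmarked tool) and writes the result as a square matrix in standard PHYLIP format. The sequences, names, and the choice of k are illustrative assumptions.

```python
def kmer_profile(seq, k=3):
    """Count k-mer occurrences in a sequence (a basic word-counting AF representation)."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def jaccard_distance(p, q):
    """1 - Jaccard index over the two k-mer sets; one of many possible AF dissimilarities."""
    a, b = set(p), set(q)
    return 1.0 - len(a & b) / len(a | b)

def write_phylip_matrix(names, seqs, k=3):
    """Emit a full square pairwise distance matrix in standard PHYLIP format:
    a line with the number of taxa, then one row per taxon (name + distances)."""
    profiles = [kmer_profile(s, k) for s in seqs]
    n = len(names)
    lines = [str(n)]
    for i in range(n):
        row = [jaccard_distance(profiles[i], profiles[j]) for j in range(n)]
        lines.append(f"{names[i]:<10}" + " ".join(f"{d:.6f}" for d in row))
    return "\n".join(lines)

# Toy input (hypothetical sequences, for format illustration only)
seqs = {"seqA": "ACGTACGTAC", "seqB": "ACGTTTGTAC", "seqC": "TTTTAAAACC"}
print(write_phylip_matrix(list(seqs), list(seqs.values())))
```

A matrix produced this way (or the equivalent TSV of pairwise distances) is what the service would ingest for the non-tree benchmark categories.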
To evaluate the performance of currently available AF tools and create a reference data set for future comparisons, we benchmarked 24 standalone tools (Table 1), covering a large proportion of the currently available AF methods. Some tools offer multiple related methods to measure pairwise distances (or dissimilarity) between sequences; for instance, jD2Stat supports three different distance measures based on the D2 statistic: jD2Stat--d2n, jD2Stat--d2s, and jD2Stat--d2st. In this study, we included these different distance measures, resulting in a total of 74 tested tool variants (Fig. 2). Each of these tool variants was run with various combinations of parameter values (Additional file 1: Table S1). The values yielding the best performance for a given method were selected and saved in the AFproject database; if multiple parameters produced the same best-performing results for a tool, we selected only the values that were least computationally demanding (e.g., the shortest word length for word-counting methods or the smallest sketch size). Full information about the benchmarking results, including all combinations of parameter values of the evaluated tools, can be downloaded from .
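The parameter-selection rule above (best score first, then the cheapest setting as a tie-break) can be sketched as follows; the run records, field names, and use of word length as the cost proxy are illustrative assumptions, not the actual AFproject implementation.

```python
def select_best(runs):
    """Pick the parameter set with the highest benchmark score; on ties,
    prefer the least computationally demanding setting (here, the shortest
    word length, as for word-counting methods)."""
    best_score = max(r["score"] for r in runs)
    tied = [r for r in runs if r["score"] == best_score]
    return min(tied, key=lambda r: r["word_length"])

# Hypothetical runs of one tool with different word lengths
runs = [
    {"word_length": 8,  "score": 0.91},
    {"word_length": 12, "score": 0.94},
    {"word_length": 10, "score": 0.94},
]
print(select_best(runs))  # → {'word_length': 10, 'score': 0.94}
```

For sketch-based methods the same rule would apply with sketch size in place of word length.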
All data sets and results discussed in the paper are freely available from our website through its download page. The specific reference data sets are available either through the dedicated pages or can be downloaded as a single package through: -reference_datasets190511.zip. The source code of the AFproject service is available under an open source license (Mozilla Public License Version 2.0) at -org/afproject.