PubTransformer

A site to transform PubMed publications into the following bibliographic reference formats: ADS, BibTeX, EndNote, ISI (used by the Web of Knowledge), RIS, MEDLINE, and Microsoft Word 2007 XML.

Algorithms - Top 30 Publications

Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees.

Using traditional approaches, a brain-computer interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated, e.g., by subject-to-subject transfer of a pre-trained classifier or by unsupervised adaptive classification methods that learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder, and thus to achieve reliable unsupervised, calibrationless decoding with a guarantee to recover the true class means.

Continuous uniformly finite time exact disturbance observer based control for fixed-time stabilization of nonlinear systems with mismatched disturbances.

This paper presents a continuous composite control scheme to achieve fixed-time stabilization for nonlinear systems with mismatched disturbances. The composite controller is constructed in two steps: first, uniformly finite-time exact disturbance observers are proposed to estimate and compensate for the disturbances; then, based on the adding-a-power-integrator technique and fixed-time stability theory, a continuous fixed-time stable state feedback controller and Lyapunov functions are constructed to achieve global fixed-time system stabilization. The proposed control method extends existing fixed-time stable control results to high-order nonlinear systems with mismatched disturbances. Moreover, the proposed control scheme improves the disturbance rejection performance and achieves performance recovery of the nominal system. Simulation results are provided to show the effectiveness, superiority, and applicability of the proposed control scheme.

Three-Dimensional Super-Resolution in Eukaryotic Cells Using the Double-Helix Point Spread Function.

Single-molecule localization microscopy, typically based on total internal reflection illumination, has taken our understanding of protein organization and dynamics in cells beyond the diffraction limit. However, biological systems exist in a complicated three-dimensional environment, which has required the development of new techniques, including the double-helix point spread function (DHPSF), to accurately visualize biological processes. The application of the DHPSF approach has so far been limited to the study of relatively small prokaryotic cells. By matching the refractive index of the objective lens immersion liquid to that of the sample media, we demonstrate DHPSF imaging of up to 15-μm-thick whole eukaryotic cell volumes in three to five imaging planes. We illustrate the capabilities of the DHPSF by exploring large-scale membrane reorganization in human T cells after receptor triggering, and by using single-particle tracking to image several mammalian proteins, including membrane, cytoplasmic, and nuclear proteins in T cells and embryonic stem cells.

Assessment of the Intraoperative Consultation Service Rendered by General Pathologists in a Scenario Where a Well-Defined Decision Algorithm Is Followed.

Intraoperative consultation (IOC) remains an area of general practice even within subspecialized pathology departments. This study assesses the IOCs rendered in a general pathology setting where surgeons integrate these results in a well-defined algorithm, developed with the input of specialized pathologists.

Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database.

Image encryption technology is one of the main means of ensuring the safety of image information. Exploiting the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this paper, digital image encryption transforms the image pixel gray values by using a chaotic sequence to scramble pixel locations and by establishing a hyperchaotic map between quaternary sequences and DNA sequences, combined with the logic of transformations between DNA sequences. The bases are replaced under displacement rules by using DNA coding over a number of iterations based on an enhanced quaternary hyperchaotic sequence generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption performance but also effectively resists chosen-plaintext, statistical, and differential attacks.
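
The core scrambling step can be illustrated in isolation. The sketch below uses the simpler logistic map rather than Chen's hyperchaotic system, and the key values are hypothetical; it shows how a chaos-derived permutation relocates pixels reversibly, which is the first stage the abstract describes:

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Chaotic sequence from the logistic map x_{k+1} = r*x_k*(1-x_k)."""
    x = x0
    for _ in range(burn_in):          # discard the transient iterations
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def scramble(image, key=(0.3567, 3.99)):
    """Permute pixel positions using the ranking of a chaotic sequence."""
    flat = image.ravel()
    perm = np.argsort(logistic_sequence(key[0], key[1], flat.size))
    return flat[perm].reshape(image.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original image exactly."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

Because the permutation is fully determined by the key, a receiver holding the same key regenerates `perm` and inverts the scrambling without transmitting it.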

Ethanol mediated As(III) adsorption onto Zn-loaded pinecone biochar: Experimental investigation, modeling, and optimization using hybrid artificial neural network-genetic algorithm approach.

Organic matter (OM) and its oxidation products often influence the fate and transport of heavy metals in subsurface aqueous systems through interaction with mineral surfaces. This study investigates ethanol (EtOH)-mediated As(III) adsorption onto Zn-loaded pinecone (PC) biochar through batch experiments conducted under a Box-Behnken design. The effect of EtOH on the As(III) adsorption mechanism was quantitatively elucidated by fitting the experimental data using artificial neural network and quadratic modeling approaches. The quadratic model could describe the limiting nature of EtOH and pH on As(III) adsorption, whereas the neural network revealed the stronger influence of EtOH (64.5%) followed by pH (20.75%) and As(III) concentration (14.75%) on the adsorption phenomena. Moreover, the interaction among process variables indicated that EtOH enhances As(III) adsorption over a pH range of 2 to 7, possibly by facilitating a ligand-metal (Zn) binding complexation mechanism. Eventually, the hybrid response surface model-genetic algorithm (RSM-GA) approach predicted a better optimal solution than RSM alone, i.e., adsorptive removal of As(III) (10.47 μg/g) is facilitated at 30.22 mg C/L of EtOH with an initial As(III) concentration of 196.77 μg/L at pH 5.8. The implications of this investigation might help in understanding the application of biochar for the removal of various As(III) species in the presence of OM.
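
The RSM-GA coupling can be sketched in a minimal form: a genetic algorithm searching for the maximum of a fitted quadratic (response surface) model. The surface coefficients and GA settings below are illustrative, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def response_surface(x):
    """Hypothetical fitted quadratic model (coefficients illustrative only);
    its maximum sits at x = (0.5, -0.2) inside the search box."""
    return -(x[..., 0] - 0.5) ** 2 - 2.0 * (x[..., 1] + 0.2) ** 2 + 1.0

def ga_maximize(f, bounds, pop_size=40, generations=60, mut_sigma=0.1):
    """Simple real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation, elitism."""
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = f(pop)
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]            # keep the fitter half
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random(a.shape) < 0.5                 # uniform crossover
        children = np.where(mask, a, b)
        children += rng.normal(0.0, mut_sigma, children.shape)  # mutation
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                              # elitism: keep the best
    return pop[np.argmax(f(pop))]

best = ga_maximize(response_surface, bounds=[(-1, 1), (-1, 1)])
```

In the study's setting the decision variables would be EtOH dose, initial As(III) concentration, and pH rather than this toy two-variable box.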

ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing.

Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT).

Algorithms for survival: a comparative perspective on emotions.

The nature and neural implementation of emotions is the subject of vigorous debate. Here, we use Bayesian decision theory to address key complexities in this field and conceptualize emotions in terms of their relationship to survival-relevant behavioural choices. Decision theory indicates which behaviours are optimal in a given situation; however, the calculations required are radically intractable. We therefore conjecture that the brain uses a range of pre-programmed algorithms that provide approximate solutions. These solutions seem to produce specific behavioural manifestations of emotions and can also be associated with core affective dimensions. We identify principles according to which these algorithms are implemented in the brain and illustrate our approach by considering decision making in the face of proximal threat.

A critical analysis of computational protein design with sparse residue interaction graphs.

Protein design algorithms enumerate a combinatorial number of candidate structures to compute the Global Minimum Energy Conformation (GMEC). To efficiently find the GMEC, protein design algorithms must methodically reduce the conformational search space. By applying distance and energy cutoffs, the protein system to be designed can thus be represented using a sparse residue interaction graph, where the number of interacting residue pairs is less than all pairs of mutable residues, and the corresponding GMEC is called the sparse GMEC. However, ignoring some pairwise residue interactions can lead to a change in the energy, conformation, or sequence of the sparse GMEC vs. the original or the full GMEC. Despite the widespread use of sparse residue interaction graphs in protein design, the above-mentioned effects of their use have not been previously analyzed. To analyze the costs and benefits of designing with sparse residue interaction graphs, we computed the GMECs for 136 different protein design problems both with and without distance and energy cutoffs, and compared their energies, conformations, and sequences. Our analysis shows that the differences between the GMECs depend critically on whether or not the design includes core, boundary, or surface residues. Moreover, neglecting long-range interactions can alter local interactions and introduce large sequence differences, both of which can result in significant structural and functional changes. Designs on proteins with experimentally measured thermostability show it is beneficial to compute both the full and the sparse GMEC accurately and efficiently. To this end, we show that a provable, ensemble-based algorithm can efficiently compute both GMECs by enumerating a small number of conformations, usually fewer than 1000.
This provides a novel way to combine sparse residue interaction graphs with provable, ensemble-based algorithms to reap the benefits of sparse residue interaction graphs while avoiding their potential inaccuracies.

Random Deep Belief Networks for Recognizing Emotions from Speech Signals.

Human emotions can now be recognized from speech signals using machine learning methods; however, these methods achieve lower recognition accuracies in real applications because they lack rich representation ability. Deep belief networks (DBNs) can automatically discover multiple levels of representation in speech signals. To take full advantage of this ability, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then fed to a DBN to yield higher-level features, which serve as input to a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
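
The random-subspace ensemble with majority voting can be sketched independently of the deep networks. The stand-in base learner below is a nearest-centroid classifier, an assumption made for brevity; the paper trains a DBN per subspace:

```python
import numpy as np

rng = np.random.default_rng(1)

class NearestCentroid:
    """Stand-in base learner (the paper uses a DBN per subspace instead)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

def fit_random_subspace_ensemble(X, y, n_members=15, subspace_dim=4):
    """Train one base learner per random feature subspace."""
    ensemble = []
    for _ in range(n_members):
        idx = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        ensemble.append((idx, NearestCentroid().fit(X[:, idx], y)))
    return ensemble

def predict_majority(ensemble, X):
    """Fuse member predictions by majority vote, one label per sample."""
    votes = np.stack([clf.predict(X[:, idx]) for idx, clf in ensemble])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

The subspace projection plays the diversity-inducing role the abstract describes; swapping the base learner for a DBN changes only the `fit`/`predict` pair.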

Toward a systematic exploration of nano-bio interactions.

Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between the inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data-driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modeling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with the high-speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate a greatly increased focus on systematic modification of the physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain a better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability.

Texture Analysis of Abnormal Cell Images for Predicting the Continuum of Colorectal Cancer.

Abnormal cell (ABC) tissue is markedly heterogeneous and can be categorized into three main types: benign hyperplasia (BH), carcinoma (Ca), and intraepithelial neoplasia (IN), a precursor cancerous lesion. In this study, the goal is to determine and characterize the continuum of colorectal cancer by using a 3D-texture approach. ABC was segmented in a preprocessing step using an active contour segmentation technique. Cell types were analyzed based on textural features extracted from gray-level co-occurrence matrices (GLCMs). Significant texture features were selected using an analysis of variance (ANOVA) of ABC with a cutoff of p < 0.01. The selected features were reduced with a principal component analysis (PCA), which accounted for 97% of the cumulative variance of the significant features. The simulation results identified 158 significant features out of a total of 624 texture features extracted from GLCMs. Performance metrics of ABC discrimination based on significant texture features showed 92.59% classification accuracy, 100% sensitivity, and 94.44% specificity. These findings suggest that texture features extracted from GLCMs are sensitive enough to discriminate between the ABC types and offer the opportunity to predict cell characteristics of colorectal cancer.
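
A minimal 2D version of the GLCM feature extraction step might look as follows; the 3D extension and the study's full Haralick feature set are not reproduced here:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[image[i, j], image[i + dy, j + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """A few classic Haralick-style statistics of a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),          # local gray-level variation
        "energy": np.sum(p ** 2),                      # uniformity of the GLCM
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }
```

A constant image yields zero contrast and maximal energy, while alternating stripes maximize contrast, which is exactly the sensitivity the study exploits to separate tissue types.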

Ranking of Sites for Installation of Hydropower Plant Using MLP Neural Network Trained with GA: A MADM Approach.

Every energy system we consider is an entity by itself, defined by parameters that are interrelated according to physical laws. In recent years, tremendous importance has been given to research on site selection in an imprecise environment. In this context, decision making about the suitable location of a power plant installation site is an issue of relevance. Environmental impact assessment has been used as a legislative requirement in site selection for decades. The purpose of the current work is to develop a model for decision makers to rank or classify various power plant projects according to multiple criteria attributes such as air quality, water quality, cost of energy delivery, ecological impact, natural hazard, and project duration. The case study in the paper relates to the application of a multilayer perceptron trained by a genetic algorithm for ranking various power plant locations in India.

Evaluation of Modified 2-Tiered Serodiagnostic Testing Algorithms for Early Lyme Disease.

The conventional 2-tiered serologic testing protocol for Lyme disease (LD), an enzyme immunoassay (EIA) followed by immunoglobulin M and immunoglobulin G Western blots, performs well in late-stage LD but is insensitive in patients with erythema migrans (EM), the most common manifestation of the illness. Western blots are also complex, difficult to interpret, and relatively expensive. In an effort to improve test performance and simplify testing in early LD, we evaluated several modified 2-tiered testing (MTTT) protocols, which use 2 assays designed as first-tier tests sequentially, without the need of Western blots.

Identifying multiple myeloma patients using data from the French health insurance databases: Validation using a cancer registry.

This study aimed to assess the performance of several algorithms based on hospital diagnoses and the long-term diseases scheme to identify multiple myeloma patients. Potential multiple myeloma patients in 2010 to 2013 were identified by the presence of hospital records with at least 1 main diagnosis code for multiple myeloma (ICD-10 "C90"). Alternative algorithms also considered related and associated diagnoses, combination with long-term conditions, or at least 2 diagnoses. Incident patients were those with no previous "C90" codes in the past 24 or 12 months. The sensitivity, specificity, and positive and negative predictive values (PPVs and NPVs) were computed, using a French cancer registry for the corresponding area and period as the criterion standard. The long-term conditions data extracted concerned 11,559 patients (21,846 for hospital data). The registry contained 125 cases of multiple myeloma. Sensitivity was 70% when using only main hospital diagnoses (specificity 100%, PPV 79%), 76% when also considering related diagnoses (specificity 100%, PPV 74%), and 90% with associated diagnoses included (100% specificity, 64% PPV). Given their good performance, the selected algorithms can be used to study the benefits and risks of drugs in treated multiple myeloma patients.
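
The reported validation metrics follow directly from a 2x2 table of algorithm results against the registry gold standard; a minimal sketch (the counts in the test are illustrative, not the study's data):

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table
    (algorithm result vs. cancer-registry criterion standard)."""
    return {
        "sensitivity": tp / (tp + fn),   # cases the algorithm catches
        "specificity": tn / (tn + fp),   # non-cases correctly excluded
        "ppv": tp / (tp + fp),           # flagged patients who are true cases
        "npv": tn / (tn + fn),           # unflagged patients who are true non-cases
    }
```

Note that with a rare disease, even a perfect-specificity algorithm can show a modest PPV once associated diagnoses widen the net, which is the trade-off the abstract reports.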

Mexican Hat Wavelet Kernel ELM for Multiclass Classification.

Kernel extreme learning machine (KELM) is a novel feedforward neural network widely used in classification problems. To some extent, it solves the existing problems of invalid nodes and large computational complexity in ELM. However, the traditional KELM classifier usually has low test accuracy when faced with multiclass classification problems. To solve this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier successfully improves the training accuracy and reduces the training time in multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers.

Patch Based Multiple Instance Learning Algorithm for Object Tracking.

To deal with illumination changes, pose variations, and serious partial occlusion, a patch-based multiple instance learning (P-MIL) algorithm is proposed. The algorithm divides an object into many blocks. Then, the online MIL algorithm is applied to each block to obtain a strong classifier. The algorithm takes into account both the average classification score and the classification scores of all the blocks when detecting the object. In particular, compared with the whole-object-based MIL algorithm, the P-MIL algorithm detects the object according to the unoccluded patches when partial occlusion occurs. After detecting the object, the learning rates for updating the weak classifiers' parameters are adaptively tuned. The classifier updating strategy avoids both over- and under-updating the parameters. Finally, the proposed method is compared with other state-of-the-art algorithms on several classical videos. The experimental results illustrate that the proposed method performs well, especially in the case of illumination changes, pose variations, and partial occlusion. Moreover, the algorithm achieves real-time object tracking.

A Method for Consensus Reaching in Product Kansei Evaluation Using Advanced Particle Swarm Optimization.

Consumers' opinions toward product design alternatives are often subjective and perceptual; they reflect consumers' perception of a product and can be described using Kansei adjectives. Therefore, Kansei evaluation is often employed to determine consumers' preferences. However, identifying and improving the reliability of consumers' Kansei evaluation opinions toward design alternatives plays an important role in adding assurance and reducing uncertainty in successful product design. To solve this problem, this study employs a consensus model to measure consistency among consumers' opinions, and an advanced particle swarm optimization (PSO) algorithm combined with the Linearly Decreasing Inertia Weight (LDW) method is proposed for consensus reaching by minimizing the adjustment of consumers' opinions. Furthermore, the process of the proposed method is presented and the details are illustrated using an example of electronic scooter design evaluation. The case study reveals that the proposed method is promising for reaching a consensus by searching for optimal solutions with PSO and improving the reliability of consumers' evaluation opinions toward design alternatives according to Kansei indexes.
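
The LDW-PSO component can be sketched as follows; the objective below is a toy sphere function rather than the paper's consensus-adjustment cost, and all parameter values are conventional defaults, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_minimize(f, dim, n_particles=30, iters=100,
                 w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, bound=5.0):
    """PSO with a Linearly Decreasing Inertia Weight (LDW) schedule."""
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()                # global best
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # LDW: 0.9 -> 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

The decreasing inertia weight shifts the swarm from global exploration early on to local exploitation near the end, which is the LDW rationale the abstract invokes.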

An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.

The advancement of information technology in financial applications nowadays has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in a highly dynamic environment where trading systems respond to one another at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies, which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim to develop feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks at the microscopic level. Our empirical results show that the proposed GA-based system is able to significantly improve the accuracy of price movement prediction, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.

A Novel Graph Constructor for Semisupervised Discriminant Analysis: Combined Low-Rank and k-Nearest Neighbor Graph.

Semisupervised Discriminant Analysis (SDA) is a semisupervised dimensionality reduction algorithm that can easily resolve the out-of-sample problem. Related works usually focus on the geometric relationships of data points, which are not obvious, to enhance the performance of SDA. Different from these related works, the regularized graph construction is researched here, which is important in graph-based semisupervised learning methods. In this paper, we propose a novel graph for Semisupervised Discriminant Analysis, called the combined low-rank and k-nearest neighbor (LRKNN) graph. In our LRKNN graph, we map the data to the LR feature space and then adopt kNN to satisfy the algorithmic requirements of SDA. Since the low-rank representation can capture the global structure and the k-nearest neighbor algorithm can maximally preserve the local geometrical structure of the data, the LRKNN graph can significantly improve the performance of SDA. Extensive experiments on several real-world databases show that the proposed LRKNN graph is an efficient graph constructor that largely outperforms other commonly used baselines.
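
The kNN half of the graph construction can be sketched on raw features; in the paper the data are first mapped into the low-rank (LR) feature space before neighbors are found, a step omitted here for brevity:

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric k-nearest-neighbor affinity graph with 0/1 weights."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)             # exclude self-loops
    W = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]       # k closest points per row
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, nn.ravel()] = 1.0
    return np.maximum(W, W.T)               # symmetrize the adjacency
```

Such an affinity matrix is the regularizer input that graph-based semisupervised methods like SDA consume; replacing the Euclidean space with LR coordinates yields the combined LRKNN graph.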

Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features.

As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, traditional CNN models have some shortcomings in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases.

Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: A review.

Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail.
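
The basic NLM filter that the review builds on can be sketched as follows: a naive, unoptimized 2D implementation with illustrative patch, search-window, and smoothing parameters, not any of the CT-specific adaptations reviewed:

```python
import numpy as np

def nlm_denoise(img, patch=1, search=5, h=0.15):
    """Basic nonlocal means: each pixel becomes a weighted average of pixels
    whose surrounding patches look similar (weights ~ exp(-dist^2 / h^2))."""
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum = vsum = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = ci + di, cj + dj
                    q = p[qi - patch:qi + patch + 1, qj - patch:qj + patch + 1]
                    w = np.exp(-np.mean((ref - q) ** 2) / h ** 2)  # patch similarity
                    wsum += w
                    vsum += w * p[qi, qj]
            out[i, j] = vsum / wsum
    return out
```

Because the weights come from patch similarity rather than spatial distance alone, edges with self-similar neighborhoods are preserved while Gaussian-like noise is averaged out, which is why the filter transfers well to LDCT noise suppression.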

Ranking Support Vector Machine with Kernel Approximation.

Learning to rank has become important in recent years due to its successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
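
The random Fourier feature idea mentioned above can be sketched directly: a randomized feature map whose inner products approximate a Gaussian (RBF) kernel, so the full kernel matrix never has to be formed. The feature count and gamma below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_rff_map(d, n_features=2000, gamma=0.5):
    """Random Fourier feature map z such that
    z(x) . z(y) ~ exp(-gamma * ||x - y||^2)."""
    # For the RBF kernel, frequencies are Gaussian with std sqrt(2 * gamma).
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), (d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    def z(X):
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    return z
```

After mapping, a linear ranker trained on `z(X)` behaves approximately like a kernel RankSVM, but training scales with the number of features instead of the square of the number of samples.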

Early hybrid approach and enteral feeding algorithm could reduce the incidence of necrotising enterocolitis in neonates with ductus-dependent systemic circulation.

The reported incidence of necrotising enterocolitis in neonates with complex CHD with ductus-dependent systemic circulation ranges from 6.8% to 13% despite surgical treatment; the overall mortality is between 25% and 97%. The incidence of gastrointestinal complications after hybrid palliation for neonates with ductus-dependent systemic circulation still has to be defined, but seems comparable with that following the Norwood procedure.

The Correlation-Base-Selection Algorithm for Diagnostic Schizophrenia Based on Blood-Based Gene Expression Signatures.

Microarray analysis of gene expression is often used to diagnose different types of disease, and many studies report remarkable achievements for diseases of the nervous system. Clinical diagnosis of schizophrenia (SCZ) still depends on doctors' experience, which is unreliable and needs to be made more objective and quantitative. To address this problem, we collected whole-blood gene expression data from four studies, comprising 152 individuals with SCZ and 138 normal controls from different regions. The correlation-based feature selection (CFS) algorithm, a machine learning method, was applied in this study, and 103 genes significantly differentially expressed between patients and controls, called "feature genes," were selected; a model for SCZ diagnosis was then built. The samples were subdivided into 10 groups, and cross-validation showed that the model we constructed achieved nearly 100% classification accuracy. Mathematical evaluation of the datasets before and after data processing proved the effectiveness of our algorithm. Feature genes were enriched in Parkinson's disease, oxidative phosphorylation, and TGF-beta signaling pathways, which were previously reported to be associated with SCZ. These results suggest that analysis of gene expression in whole blood by our model could be a useful tool for diagnosing SCZ.

A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction.

In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.

Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator.

In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. The interval type-2 fuzzy system is designed to resemble a feedback linearization (FL) control law, and a sliding mode controller is designed to compensate for the approximation error between the FL control law and the interval type-2 fuzzy system. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used for testing, and simulation results are presented to show the effectiveness of the proposed method, which can control an unknown system well.

Reversible Data Hiding Based on DNA Computing.

Biocomputing, especially DNA computing, has developed greatly and is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by histogram modification, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm with biological technology. Compared with previous results, our experimental results significantly improve the embedding rate (ER). Furthermore, the peak signal-to-noise ratios (PSNRs) of some test images are also improved. Experimental results show that the method is suitable for protecting the copyright of cover images in DNA-based information security.
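
The histogram-modification baseline that the paper builds on can be sketched without the DNA-computing layer: pixels at the histogram peak carry the payload, and bins between the peak and an empty bin are shifted to make room, which keeps the process exactly reversible:

```python
import numpy as np

def embed(img, bits):
    """Histogram-shift embedding: peak pixels carry one bit each
    (peak -> peak + bit); values between peak+1 and an empty bin shift right."""
    flat = img.ravel().astype(np.int64)
    hist = np.bincount(flat, minlength=256)
    peak = int(np.argmax(hist))
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # empty bin right of peak
    assert hist[zero] == 0, "need an empty bin for perfect reversibility"
    out = flat.copy()
    out[(flat > peak) & (flat < zero)] += 1             # make room next to the peak
    carriers = np.flatnonzero(flat == peak)             # payload capacity = peak count
    assert len(bits) <= len(carriers), "payload exceeds capacity"
    out[carriers[: len(bits)]] += np.asarray(bits)
    return out.reshape(img.shape), peak, zero, len(bits)

def extract(stego, peak, zero, n_bits):
    """Recover the bits and restore the cover image exactly."""
    flat = stego.ravel().astype(np.int64)
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))
    bits = (flat[carriers[:n_bits]] == peak + 1).astype(int)
    out = flat.copy()
    out[carriers[:n_bits]] = peak                       # undo the embedded bits
    out[(out > peak) & (out <= zero)] -= 1              # undo the shift
    return bits, out.reshape(stego.shape)
```

Capacity equals the peak-bin count, and every pixel changes by at most 1, which is why PSNR stays high; the paper's contribution layers DNA coding on top of this reversible core.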

Prosthetic joint infection: development of an evidence-based diagnostic algorithm.

Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons, and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps.

Treatment Algorithm for Chronic Idiopathic Constipation and Constipation-Predominant Irritable Bowel Syndrome Derived from a Canadian National Survey and Needs Assessment on Choices of Therapeutic Agents.

Background. Chronic idiopathic constipation (CIC) and constipation-predominant irritable bowel syndrome (IBS-C) are common functional lower gastrointestinal disorders that impair patients' quality of life. In a national survey, we aimed to evaluate (1) Canadian physician practice patterns in the utilization of therapeutic agents listed in the new ACG and AGA guidelines; (2) physicians' satisfaction with these agents for their CIC and IBS-C patients; and (3) the usefulness of these new guidelines in their clinical practice. Methods. A 9-item questionnaire was sent to 350 Canadian specialists to evaluate their clinical practice in the management of CIC and IBS-C. Results. The response rate to the survey was 16% (n = 55). Almost all (96%) respondents followed a standard, stepwise approach to management, while they believed that only 24% of referring physicians followed the same approach. Respondents found guanylyl cyclase C (GCC) agonists the most satisfying when treating their patients. Among the 69% of respondents who were aware of the published guidelines, only 50% found them helpful in prioritizing treatment choices, and 69% of respondents indicated that a treatment algorithm applicable to Canadian practice would be valuable. Conclusion. Based on this needs assessment, a treatment algorithm was developed to provide clinical guidance in the management of IBS-C and CIC in Canada.