Machine Learning

Machine learning methods can be a helpful alternative to classical probability-based methods. Instead of using a hypothesis-driven approach with discriminative models, one can employ generative models for data-driven analyses, so that patterns in the data drive the learning. When ill-conditioned or NP-hard problems rule out exact solutions, meta-heuristics can be employed to obtain approximations. Computational intelligence methods such as artificial neural networks, genetic algorithms, and swarm intelligence offer still other approaches for tackling a data-analytic problem. These analyses form a complex challenge: they require a strong set of skills among analysts, and they also require a wide variety of computational algorithms.
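
As a rough illustration of the discriminative/generative distinction, the sketch below (in Python with scikit-learn, standing in for NXG Logic's own implementation) fits one model of each type to the same synthetic data:

    # A discriminative model fits p(y|x) directly; a generative model fits
    # p(x|y) per class and applies Bayes' rule. Data here are synthetic.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    models = {"discriminative (logistic regression)": LogisticRegression(max_iter=1000),
              "generative (Gaussian naive Bayes)": GaussianNB()}
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")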

Gaining new insights in data analysis and establishing new leads for future research require a combination of data filtering, collapsing, transformation, and the application of discriminative and generative models. In addition, the analyst must be able to rapidly condense the results into a straightforward, understandable format for publication or presentation. Achieve these goals with NXG Logic. NXG Logic provides a highly developed set of algorithms, based on decades of computational expertise, that enables analysts to rapidly tackle their hypothesis- and data-driven tasks. NXG Logic also offers advanced machine learning and computational intelligence tools to solve complex analyses.

Text Mining

Text mining is a language- and semantic-based technique that attempts to evaluate unstructured text documents for information retrieval. In its simplest form, text mining usually involves K-means cluster analysis of documents to group together documents having similar term frequencies. Understanding the cluster structure of documents can reveal the major groups of documents present and the concepts portrayed by each group.

The simplest text mining approach involves membership prediction for a particular document cluster using input text for a single document, and then using this information in predictive analytics derived from data mining. For example, text data from customer complaints logged at call centers can be mined to predict which customers may churn (leave) or return, followed by distribution of custom-tailored advertisements for increased profitability. Hospitals can use text data from customer satisfaction questionnaires to identify inefficiencies in patient care, or to identify correlations between worker satisfaction and customer care in specific work units. NXG Logic technology enables you to perform these text mining analyses.
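
Both steps, clustering documents by term frequency and then predicting cluster membership for new text, can be sketched in a few lines of Python with scikit-learn; the documents and pipeline here are illustrative, not NXG Logic's implementation:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = ["shipment arrived late and the box was damaged",
            "late delivery, package damaged in transit",
            "great product, friendly support staff",
            "support was helpful and the product works well"]

    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)              # documents -> term-frequency vectors
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster per document:", km.labels_)

    # Membership prediction for a new complaint; the resulting cluster label
    # could then feed a downstream churn model as one of its inputs.
    new_doc = tfidf.transform(["box arrived damaged and very late"])
    print("new document cluster:", km.predict(new_doc)[0])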

Component Subtraction - Asset Decorrelation

Component subtraction is a quantitative finance technique which attempts to remove the "market" effect due to overall market-wide sentiment. This is accomplished by decorrelating asset price returns, which can make a portfolio less biased by sentiment and more adaptive to real market changes. Denoising is a noise removal approach which is commonly used to reduce uncertainty in asset prices over a time horizon. Decorrelation of assets requires use of principal components analysis (PCA) and multivariate regression to remove various levels of correlation. Noise reduction for asset price return histories requires fitting the limit distribution of eigenvalue density (the Marchenko-Pastur law) of the asset correlation matrix, determining the noise cutoff, and then applying multivariate regression to remove noise from price returns.
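
A minimal numerical sketch of both steps, assuming a T x N matrix of asset returns and using NumPy (NXG Logic's pipeline combines and extends these steps):

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 1000, 50                              # T observations of N assets
    market = rng.normal(size=(T, 1))             # shared "market" factor
    R = 0.3 * market + rng.normal(size=(T, N))   # synthetic asset returns

    Z = (R - R.mean(0)) / R.std(0)               # standardized returns
    C = np.corrcoef(R, rowvar=False)             # N x N correlation matrix
    evals, evecs = np.linalg.eigh(C)             # eigenvalues, ascending

    # Marchenko-Pastur upper edge for pure noise: lambda_max = (1 + sqrt(N/T))^2.
    # Eigenvalues above this edge are treated as signal; the rest as noise.
    lam_max = (1.0 + np.sqrt(N / T)) ** 2
    print("eigenvalues above MP edge:", evals[evals > lam_max])

    # Component subtraction: regress each asset on the dominant ("market")
    # principal component and keep the residuals as decorrelated returns.
    pc1 = Z @ evecs[:, -1]                       # scores on largest component
    beta = (R.T @ pc1) / (pc1 @ pc1)             # per-asset regression slopes
    R_decorrelated = R - np.outer(pc1, beta)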

NXG Logic's technology combines many of the required computational steps in a step-saving fashion, shortening the pipeline and reducing workflow so that results are readily obtainable and actionable. Portfolio managers can apply decorrelation and denoising to reduce market effects in candidate assets being considered for a portfolio, and pattern recognition analysts can apply these techniques to better find signals in data. Overall, NXG Logic technology enables you to rapidly decorrelate and denoise a dataset to increase the informativeness of results.

Risk Management

Risk managers must be able to perform Monte Carlo cost analysis, fit probability distributions, simulate correlated data, and tackle computations involving numerous probability distributions. The Monte Carlo method assembles the outcome probability distribution from the numerous outputs of an equation into which alternative realizations of the input values are fed. The variation in input values typically comes from random draws made on probability distributions representing each input parameter. Correlated inputs present an additional challenge which must be dealt with during the generation of inputs.
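
For instance, a bare-bones Monte Carlo cost analysis might look like the following Python sketch; the cost equation and input distributions are illustrative assumptions only:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    labor_hours = rng.triangular(80, 100, 140, size=n)   # min, mode, max
    hourly_rate = rng.normal(55.0, 5.0, size=n)
    materials = rng.lognormal(mean=9.0, sigma=0.25, size=n)

    total_cost = labor_hours * hourly_rate + materials   # the cost equation

    print("mean cost:", total_cost.mean())
    print("5th/95th percentiles:", np.percentile(total_cost, [5, 95]))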

NXG Logic technology provides advanced procedures for simulating correlated variates from more than 20 probability distributions, and enables users to perform empirical cumulative distribution function (ECDF) fitting to determine the best-fitting distributions and their parameters. Monte Carlo cost analysis can also be performed via spreadsheet input of distributional parameters.
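
One standard way to simulate correlated non-normal variates is a Gaussian copula: draw correlated normals via a Cholesky factor, map them to uniforms, and push the uniforms through the target inverse CDFs. A SciPy sketch, with illustrative target distributions and correlation:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    target_corr = np.array([[1.0, 0.7],
                            [0.7, 1.0]])
    L = np.linalg.cholesky(target_corr)

    z = rng.standard_normal((10_000, 2)) @ L.T   # correlated standard normals
    u = stats.norm.cdf(z)                        # correlated uniforms

    x_gamma = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)
    x_beta = stats.beta.ppf(u[:, 1], a=2.0, b=5.0)

    print("achieved correlation:", np.corrcoef(x_gamma, x_beta)[0, 1])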

Simulation

Simulation is the process of generating random numbers which follow either a prescribed probability distribution or a set of correlated distributions. NXG Logic has developed a quasi-random number generator based on Sobol sequences, as well as a 64-bit pseudo-random generator with a period of 8.5E37. NXG Logic technology includes the ability to fit data while assessing correlation, determine the best-fitting distributions, and then simulate the input dataset using both continuous and discrete variates. This technology is employed, for example, in our Instructor package for simulating student-specific semester projects for graduate-level statistics courses.
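
The sketch below uses SciPy's Sobol generator as a stand-in for NXG Logic's generators, comparing the discrepancy (uniformity of coverage) of quasi-random and pseudo-random point sets:

    import numpy as np
    from scipy.stats import qmc

    sobol = qmc.Sobol(d=2, scramble=True, seed=0)
    quasi = sobol.random_base2(m=10)        # 2**10 = 1024 low-discrepancy points

    rng = np.random.default_rng(0)
    pseudo = rng.random((1024, 2))          # ordinary pseudo-random points

    # Low-discrepancy points fill the unit square more evenly, which often
    # improves Monte Carlo convergence for smooth integrands.
    print(qmc.discrepancy(quasi), qmc.discrepancy(pseudo))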

NXG Logic's technology enables analysts to fit more than 20 probability distributions to a dataset and then, using the best-fitting distributions, simulate a similar dataset for analysis. Monte Carlo uncertainty and cost analyses can also be performed with NXG Logic technology in support of risk management for resource optimization.
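
That fit-then-simulate loop can be sketched with SciPy's maximum-likelihood fitting and the Kolmogorov-Smirnov statistic; the candidate list here is an illustrative subset of the distributions supported:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.gamma(shape=2.0, scale=3.0, size=500)     # stand-in input data

    candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
                  "weibull_min": stats.weibull_min}
    best_name, best_dist, best_params, best_ks = None, None, None, np.inf
    for name, dist in candidates.items():
        params = dist.fit(data)                          # maximum likelihood fit
        ks = stats.kstest(data, name, args=params).statistic
        if ks < best_ks:
            best_name, best_dist, best_params, best_ks = name, dist, params, ks

    print("best fit:", best_name, "KS =", round(best_ks, 4))
    simulated = best_dist.rvs(*best_params, size=500, random_state=2)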

BioChips - DNA Microarrays

Analyzing biochip and microarray data involves pre-processing input data, identifying novel clusters of features (genes) and objects (animals, subjects, etc.), and identifying differentially expressed and co-regulated genes. These analyses form a complex challenge: they require a strong set of skills among analysts, and they also require a wide variety of computational algorithms.
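
As one concrete piece of such a pipeline, the sketch below runs per-gene two-sample t-tests with Benjamini-Hochberg false-discovery control on synthetic expression data; real microarray workflows add normalization and background correction first:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_genes = 2000
    group_a = rng.normal(0.0, 1.0, size=(10, n_genes))   # 10 control arrays
    group_b = rng.normal(0.0, 1.0, size=(10, n_genes))   # 10 treated arrays
    group_b[:, :50] += 1.5                               # 50 truly changed genes

    t, p = stats.ttest_ind(group_a, group_b, axis=0)     # per-gene t-tests

    # Benjamini-Hochberg adjusted p-values, thresholded at FDR q = 0.05.
    order = np.argsort(p)
    ranked = p[order] * n_genes / np.arange(1, n_genes + 1)
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    significant = order[adjusted <= 0.05]
    print("genes called significant:", len(significant))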

Gaining new insights in genomics and establishing new leads for future research require data filtering, collapsing, transformation, and the application of discriminative and generative models. In addition, the analyst must be able to rapidly condense the results into a straightforward, understandable format for publication or presentation. Achieve these goals with NXG Logic.

NXG Logic provides a highly developed set of algorithms, based on decades of computational expertise, that enables analysts to rapidly tackle their hypothesis- and data-driven tasks. NXG Logic also offers advanced machine learning and computational intelligence tools to solve complex analyses.

Predictive Analytics

Predictive analytics encompasses a large area of business intelligence/analytics and scientific data analysis. At its simplest, predictive analytics is based on regression modeling, predicting outcomes for a given set of inputs using optimal modeling results and case studies involving types of customers or research objects (subjects, animals, etc.). Approvals for bank loans, mortgages, and credit card applications are determined using predictive analytics whose regression inputs represent family or personal income, education level, marital status, years on the job, home ownership and mortgage balance, credit score, outstanding credit card debt, and so on, while the output is the probability of payment default or bankruptcy. Economic forecasting based on autocorrelated time series is another type of predictive modeling, though one not wholly subsumed by predictive analytics.
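
A toy version of the loan-default model described above, using logistic regression in scikit-learn; the features, coefficients, and data are fabricated for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([rng.normal(60_000, 15_000, n),   # income
                         rng.integers(300, 850, n),       # credit score
                         rng.normal(8_000, 4_000, n)])    # credit card debt
    # Synthetic ground truth: low scores and high debt raise default risk.
    logit = -4 + (600 - X[:, 1]) / 120 + X[:, 2] / 10_000
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    print("P(default), first 3 applicants:", model.predict_proba(X_te)[:3, 1])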

NXG Logic's technology enables analysts to employ multiple linear regression, multivariate linear regression, polytomous (polychotomous) logistic regression, Poisson regression, and failure-time regression based on the Cox proportional hazards model. Regression diagnostics are available for most regression modules, including (a) identification of overly influential observations, (b) detection of features with high multicollinearity, and (c) model goodness-of-fit statistics.
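
The three diagnostics can be reproduced with statsmodels, as in the synthetic-data sketch below (Cook's distance for influence, variance inflation factors for multicollinearity, R-squared for goodness of fit):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)
    x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)      # deliberately collinear with x1
    y = 2 + x1 + rng.normal(size=n)

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.OLS(y, X).fit()

    cooks_d = fit.get_influence().cooks_distance[0]
    print("most influential observation:", int(np.argmax(cooks_d)))

    for i, name in [(1, "x1"), (2, "x2")]:
        print(name, "VIF =", round(variance_inflation_factor(X, i), 1))

    print("R-squared (goodness of fit):", round(fit.rsquared, 3))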

Customer Relations Management

Customer relations management (CRM) aims to drive sales growth, retain customers, and improve customer relationships by using technologies and strategies to analyze company-buyer interactions throughout the customer life cycle. Analytic data are obtained by CRM systems, which compile customer information from the web, telephone, chat, social media, marketing materials, and mail. Data in the form of customer purchase history, buying preferences, and personal history can be made available to purchasing staff during real-time customer interactions.

The inputs to CRM are commonly considered to be a company's focus on key customers, its organizational efficiency, and its management of customer knowledge, while the outcomes of CRM are market performance metrics such as market share, customer churn, attraction of new customers, customer satisfaction, sales growth, and net profit from sales. NXG Logic's technology enables CRM analysts to perform summary statistics, hypothesis testing, correlation, dimension reduction and comparison, and predictive analytics using various regression methods.

Pattern Recognition

Pattern recognition analysis requires not only advanced machine learning techniques, but also a well-developed toolkit containing algorithms for data pre-processing and transformation, fuzzification, decorrelation and denoising, inferential hypothesis testing, class discovery, class prediction, regression, predictive analytics, and function approximation. Knowledge discovery typically entails searching for a rich cluster structure in a dataset, which requires additional algorithms for non-linear manifold learning and dimension reduction, along with the ability to rescale data and then apply different distance metrics.
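
The rescale-then-measure step matters because raw Euclidean distances are dominated by whichever feature has the largest scale; the sketch below (SciPy and scikit-learn, illustrative data) standardizes first and then applies an alternative metric:

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0, 1000.0],
                  [2.0, 1010.0],
                  [1.5, 5000.0]])

    # Raw Euclidean distances are dominated by the large-scale second feature.
    print(cdist(X, X, metric="euclidean").round(1))

    Z = StandardScaler().fit_transform(X)
    # After standardization, both features contribute; other metrics
    # (cityblock, cosine, correlation) can then be compared on equal footing.
    print(cdist(Z, Z, metric="cityblock").round(2))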

NXG Logic technology combines novel approaches to more rapidly identify patterns in data that would otherwise not be discernible. Over the last several decades, NXG Logic has focused deeply on algorithm development for data pre-processing and transformations of scale and range, which can substantially improve pattern recognition efforts. Novel approaches such as super-resolution root MUSIC can allow researchers to find patterns based on holes in geometric projections, essentially identifying targets by the information they lack rather than the information they portray. The majority of pattern recognition principles are not wholly statistical and do not require inferential hypothesis testing or discriminative models, though they may involve statistical learning techniques. Pattern recognition instead employs data transformations, decorrelation, denoising, linear and non-linear dimension reduction, generative models, and more contemporary machine learning and knowledge discovery approaches.
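
For readers unfamiliar with root MUSIC, the sketch below implements the textbook algorithm for direction finding with a uniform linear array; it is a generic illustration, not NXG Logic's proprietary super-resolution variant:

    import numpy as np

    rng = np.random.default_rng(0)
    M, T, d = 8, 200, 2                     # sensors, snapshots, sources
    angles = np.deg2rad([-20.0, 15.0])      # true directions of arrival

    # Simulate snapshots: steering vectors (half-wavelength spacing) + noise.
    A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
    S = rng.standard_normal((d, T)) + 1j * rng.standard_normal((d, T))
    noise = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
    X = A @ S + 0.1 * noise

    R = X @ X.conj().T / T                  # sample covariance matrix
    evals, evecs = np.linalg.eigh(R)
    En = evecs[:, : M - d]                  # noise subspace (smallest eigenvalues)

    C = En @ En.conj().T
    # Root-MUSIC polynomial coefficients: sums of the diagonals of C,
    # for offsets k = M-1 down to -(M-1).
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)

    # Keep roots inside the unit circle closest to it; phases give the angles.
    roots = roots[np.abs(roots) < 1]
    best = roots[np.argsort(1 - np.abs(roots))[:d]]
    est = np.rad2deg(np.arcsin(np.angle(best) / np.pi))
    print("estimated DOAs (degrees):", np.sort(est))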

Simplify Lectures

Graduate-level statistics instructors must create not only semester course packs and solution manuals, but also lecture materials, slides, quizzes, exams, and projects, all while keeping up with collaborative research projects, grant writing and submission, and publication. With the probability of grant funding ever decreasing, more time must be devoted to grant writing and publication, the currency of academics and promotion. This, however, reduces the time available for managing lecture responsibilities, grading, and mentoring students who are developing their theses or dissertations.

NXG Logic's technology now saves instructors time by enabling them to rapidly develop the course materials needed for graduate-level statistics courses. Using the Instructor package for Windows, lecturers can rapidly create quizzes, exams, projects, and course packs, along with the associated grading keys and fully worked solutions. In addition, student-specific instruments can be generated with student-specific grading keys and worked solutions. Student semester projects and their datasets can be randomly generated so that no two datasets are the same. Altogether, all instruments can be made unique, so that each student has an entirely different set of problems to work on. This can help increase student ethical responsibility and reduce dishonesty (cheating).
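
One simple way to guarantee that no two students receive the same dataset is to seed a random generator from each student's ID, as in this hypothetical sketch (the Instructor package's actual scheme is not published):

    import hashlib
    import numpy as np

    def student_dataset(student_id: str, n: int = 100) -> np.ndarray:
        # Hash the ID to a reproducible seed: same ID -> same data,
        # different IDs -> different data.
        seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % 2**32
        rng = np.random.default_rng(seed)
        age = rng.integers(18, 80, n)
        score = rng.normal(70, 10, n)
        return np.column_stack([age, score])

    print(student_dataset("jdoe42")[:3])     # same ID always yields same rows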

Chemoinformatics

Chemoinformatics is an information retrieval technique which attempts to evaluate, among other things, quantitative structure-activity relationships (QSAR) between molecules of varying structure using secondary sources of information that supplement molecular structure. SMILES (Simplified Molecular-Input Line-Entry System) strings are a standard chemical nomenclature for representing molecular structure with ASCII text. As in text mining, the simplest analyses involve K-means clustering to group items with similar features; understanding the resulting cluster structure can reveal the major groups of molecules present and the characteristics each group shares.
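
A tiny example of parsing SMILES strings into fingerprints and computing pairwise Tanimoto similarity, assuming the open-source RDKit toolkit is installed; NXG Logic's own routines may differ:

    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    smiles = ["CCO", "CCN", "c1ccccc1O"]          # ethanol, ethylamine, phenol
    mols = [Chem.MolFromSmiles(s) for s in smiles]
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
           for m in mols]

    # Tanimoto similarity between fingerprints; similar structures score higher.
    for i in range(len(fps)):
        for j in range(i + 1, len(fps)):
            sim = DataStructs.TanimotoSimilarity(fps[i], fps[j])
            print(smiles[i], "vs", smiles[j], "->", round(sim, 2))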

NXG Logic technology enables you to perform these chemoinformatic analyses.

Expand Your Horizons

NXG Logic technology can be employed horizontally or vertically, and enables analysts to view results from a perspective quite different from other approaches. This includes short-cutting painstaking, time-consuming analysis steps in order to rapidly obtain the results being sought, plus additional value. Hypothesis test results for multiple features are presented in tabular and graphical outputs that increase the utility and informativeness of results. By design, many modules can be used sequentially to accomplish workflows that would otherwise be nearly impossible to achieve with other approaches.

A wide range of methods can be employed, including feature transformations, super-resolution root MUSIC, feature selection with cross-validation and greedy hill climbing, linear and non-linear dimension reduction, decorrelation and denoising of data, knowledge and class discovery, class prediction, failure-time analysis, covariance matrix shrinkage, regression, and simulation.