Machine Learning Algorithms and Training Methods: A Decision-Making Flowchart

Machine learning is set to transform investment management. However, many investment professionals are still building their understanding of how machine learning works and how to apply it. With that in mind, what follows is an introduction to machine learning training methods and a machine learning decision-making flowchart with explanatory footnotes that can help determine which kind of approach to apply depending on the final objective.


Machine learning training methods

1. Ensemble learning

No matter how well selected, each machine learning algorithm will have a certain error rate and will be prone to noisy predictions. Ensemble learning addresses these flaws by combining predictions from multiple algorithms and averaging the results. This reduces noise and therefore produces more accurate and stable predictions than the best single model. In fact, ensemble learning solutions have won many prestigious machine learning competitions over the years.

Ensemble learning brings together heterogeneous or homogeneous learners. Heterogeneous learners are different types of algorithms that are combined with a voting classifier. Homogeneous learners, on the other hand, are combinations of the same algorithm using different training data based on the bootstrap aggregation, or bagging, technique.
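As a minimal sketch of the hard-voting idea, here are three hypothetical rule-based classifiers (standing in for heterogeneous learners such as a logistic regression, a decision tree, and KNN) combined by majority vote:

```python
from collections import Counter

# Three hypothetical base classifiers, each a simple rule on one feature.
# They stand in for heterogeneous learners combined by a voting classifier.
def clf_a(x): return 1 if x[0] > 0.5 else 0
def clf_b(x): return 1 if x[1] > 0.3 else 0
def clf_c(x): return 1 if x[0] + x[1] > 0.9 else 0

def vote(classifiers, x):
    """Hard voting: each learner casts one vote; the majority class wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

sample = (0.6, 0.2)
print(vote([clf_a, clf_b, clf_c], sample))
```

Even when one learner misfires on a given sample, the majority vote can still land on the stable answer, which is the noise-reduction effect the paragraph above describes.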

2. Reinforcement learning

As virtual environments increasingly resemble the real world, trial-and-error machine learning approaches can be applied to financial markets. Reinforcement learning algorithms distill knowledge by interacting with their environment and from the data generated by their own actions. They may also use supervised or unsupervised deep neural networks (DNNs), as in deep learning (DL).

Reinforcement learning made headlines when DeepMind’s AlphaGo program beat the reigning world champion at the ancient game of Go in 2017. The AlphaGo algorithm includes an agent designed to execute actions that maximize rewards over time while also taking into account the constraints of its environment.


Like unsupervised learning, reinforcement learning has no directly labeled data for each observation and no instant feedback. Rather, the algorithm must observe its environment, learn by trying new actions, some of which may not be immediately optimal, and reuse its previous experiences. Learning occurs through trial and error.

Academics and practitioners are applying reinforcement learning to investment strategies: the agent could be a virtual trader that follows certain trading rules (actions) in a specific market (environment) to maximize its profits (rewards). However, whether reinforcement learning can navigate the complexities of financial markets is still an open question.
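The agent–environment–reward loop described above can be sketched with tabular Q-learning on a toy problem. This is a hypothetical line-world, not a trading system: the agent learns by trial and error that moving right reaches the rewarded state:

```python
import random

random.seed(0)

# Toy line-world: states 0..3, actions 0 = left, 1 = right;
# reward of 1 for reaching the final state. A minimal Q-learning sketch
# of the trial-and-error loop, not a market model.
n_states, n_actions = 4, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for _ in range(200):                      # episodes of trial and error
    s = 0
    while s != n_states - 1:
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda a: Q[s][a])
        nxt, r = step(s, a)
        # Q-learning update: blend observed reward with discounted estimate
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

policy = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)
```

Some early actions are suboptimal (the epsilon-greedy exploration), but the agent reuses its accumulated experience, stored in the Q-table, to converge on the reward-maximizing policy.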

Flowchart of machine learning decision making

[Flowchart: machine learning decision making, annotated by the numbered footnotes below]


1. Principal component analysis (PCA) addresses the complexity of the prediction model by helping reduce the number of features, or dimensions. If the data has many highly correlated features Xi, or inputs, then PCA can perform a change of basis on the data so that only the principal components with the highest explanatory power for the variance of the features are selected. A set of n linearly independent and orthogonal vectors, in which n is a natural number, or non-negative integer, is called a basis. Inputs are called features in machine learning, while in linear regression and other traditional statistical methods they are called explanatory, or independent, variables. Likewise, a target Y (output) in machine learning is an explained, or dependent, variable in statistical methods.

2. Natural language processing (NLP) includes, but is not limited to, sentiment analysis of textual data. It typically has multiple supervised and unsupervised learning steps and is often considered semi-supervised, since it has both supervised and unsupervised properties.


3. Simple or multiple linear regression without regularization (penalization) is usually classified as a traditional statistical technique rather than a machine learning method.

4. Lasso regression, or L1 regularization, and ridge regression, or L2 regularization, are regularization techniques that help avoid overfitting by adding a penalty term. Simply put, lasso reduces the number of features (feature selection), while ridge maintains the number of features. Lasso tends to yield a simpler target prediction model, while ridge can handle more complex models and multicollinearity among features. Both regularization techniques can be applied not only to statistical methods, including linear regression, but also in machine learning, such as deep learning, to deal with nonlinear relationships between targets and features.
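The contrast between the two penalties can be sketched on a one-feature regression with no intercept, where both have closed forms: the L2 penalty shrinks the coefficient smoothly, while the L1 penalty soft-thresholds it and can zero it out entirely, i.e., feature selection (toy data, for illustration only):

```python
# Roughly y = 2x, with noise.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
ols = sxy / sxx                          # unpenalized slope

def ridge(lam):
    # L2: minimize sum((y - w*x)^2) + lam*w^2  ->  w = Sxy / (Sxx + lam)
    return sxy / (sxx + lam)

def lasso(lam):
    # L1: minimize sum((y - w*x)^2) + lam*|w|  ->  soft-threshold the fit
    t = lam / (2 * sxx)
    return max(ols - t, 0.0) if ols > 0 else min(ols + t, 0.0)

print(ols, ridge(10.0), lasso(200.0))
```

With a large enough L1 penalty the lasso coefficient hits exactly zero and the feature drops out of the model, whereas ridge only shrinks it toward zero, which is the distinction the footnote draws.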

5. Machine learning applications that use a deep neural network (DNN) are often called deep learning. Their target values are continuous numeric data. Deep learning has hyperparameters (e.g., the number of epochs, the learning rate, and regularization parameters), which are set and optimized by humans, not by the deep learning algorithms themselves.
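The role of human-set hyperparameters can be sketched with the smallest possible "network", a single neuron fit by gradient descent, where the epoch count and learning rate are chosen by the practitioner rather than learned from the data:

```python
# One neuron, one weight, fit to a continuous numeric target (y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

epochs, lr = 200, 0.01                   # hyperparameters set by a human
w = 0.0
for _ in range(epochs):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 2))
```

Set the learning rate too high and the loop diverges; too low and 200 epochs are not enough; the algorithm itself never adjusts either knob, which is the footnote's point.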

6. Classification and regression trees (CARTs) and random forests have target values that are discrete or categorical data.
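As a minimal sketch of the basic step a tree repeats recursively (and a random forest's trees repeat in parallel over bootstrapped data), here is a one-split "stump" on a single feature that picks the threshold minimizing weighted Gini impurity over discrete class labels:

```python
# Toy (feature, class-label) pairs: two cleanly separated classes.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]

def gini(labels):
    """Gini impurity of a set of binary class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1 - p1) ** 2

def best_split(rows):
    """Return (threshold, weighted impurity) of the best single split."""
    best = None
    for x, _ in rows:
        left = [y for xi, y in rows if xi <= x]
        right = [y for xi, y in rows if xi > x]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if best is None or score < best[1]:
            best = (x, score)
    return best

print(best_split(data))
```

A full CART recurses on each side of the chosen threshold until the leaves are pure enough; a random forest averages many such trees to damp their individual noise.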

7. The cluster count K, one of the hyperparameters, is an input provided by a human.
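A minimal k-means sketch on one-dimensional toy data makes the point concrete: K is fixed by the practitioner before the algorithm runs, and only the cluster centers are learned:

```python
# Two obvious groups in one dimension (toy data).
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
K = 2                                    # human-chosen hyperparameter

centers = [points[0], points[3]]         # simple initialization
for _ in range(10):                      # Lloyd's iterations
    clusters = [[] for _ in range(K)]
    for p in points:
        # assign each point to its nearest center
        idx = min(range(K), key=lambda i: abs(p - centers[i]))
        clusters[idx].append(p)
    # move each center to the mean of its assigned points
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 2) for c in centers))
```

Choosing K = 3 here would force the algorithm to split one of the natural groups, which is why picking K (e.g., via an elbow plot) is a human judgment call.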

8. Hierarchical clustering is an algorithm that groups similar input data into clusters. The number of clusters is determined by the algorithm, not by direct human input.
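By contrast with k-means, a minimal agglomerative (single-linkage) sketch shows the cluster count emerging from the data: clusters are merged bottom-up until the smallest gap exceeds a distance cutoff, so no one states the number of clusters directly:

```python
# One-dimensional toy data with three natural groups.
points = [1.0, 1.1, 5.0, 5.2, 9.0]
cutoff = 1.0                             # merge distance threshold

clusters = [[p] for p in points]
while len(clusters) > 1:
    # find the pair of clusters with the smallest single-linkage distance
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    d, i, j = best
    if d > cutoff:                       # no close pairs left: stop merging
        break
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]

print(len(clusters))
```

The same merge history also yields a dendrogram, so a practitioner can inspect the hierarchy after the fact rather than committing to a cluster count up front.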

9. The K-nearest neighbors (KNN) algorithm needs the number of neighbors, k, provided by a human as a hyperparameter. The KNN algorithm can also be used for regression, but that case is omitted here for simplicity.
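A minimal KNN classification sketch, with k supplied by a human, on hypothetical two-feature data:

```python
from collections import Counter

# Toy labeled points: ((feature1, feature2), class).
train = [((1.0, 1.0), 'A'), ((1.2, 0.9), 'A'), ((5.0, 5.0), 'B'),
         ((5.2, 4.8), 'B'), ((4.9, 5.1), 'B')]

def knn_predict(query, k):
    """Majority class among the k nearest training points."""
    nearest = sorted(train,
                     key=lambda t: (t[0][0] - query[0]) ** 2
                                   + (t[0][1] - query[1]) ** 2)
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

print(knn_predict((4.5, 4.5), k=3))
```

The regression variant simply averages the neighbors' numeric targets instead of taking a majority vote; either way, k itself is never learned by the algorithm.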

10. Support vector machines (SVMs) are a set of supervised learning methods applied to linear classification but also usable for nonlinear classification and regression.

11. Naïve Bayes classifiers are probabilistic and apply Bayes’ theorem under strong (naïve) independence assumptions between features.
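A minimal Gaussian naïve Bayes sketch on hypothetical two-feature data shows both pieces of the footnote: Bayes' theorem combines a class prior with per-feature likelihoods, and the "naïve" independence assumption lets the log-likelihoods simply be summed across features:

```python
import math

# Toy training data: two classes, two features each (illustration only).
train = {'up':   [(0.5, 1.0), (0.7, 1.2), (0.6, 0.9)],
         'down': [(-0.5, 2.0), (-0.6, 2.2), (-0.4, 1.9)]}

def stats(vals):
    """Mean and (floored) variance of one feature within one class."""
    m = sum(vals) / len(vals)
    v = sum((x - m) ** 2 for x in vals) / len(vals) + 1e-9
    return m, v

def log_gauss(x, m, v):
    """Log of the Gaussian likelihood N(x; m, v)."""
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def predict(sample):
    best_cls, best_score = None, None
    total = sum(len(rows) for rows in train.values())
    for cls, rows in train.items():
        score = math.log(len(rows) / total)          # log prior
        for j, xj in enumerate(sample):
            m, v = stats([r[j] for r in rows])
            score += log_gauss(xj, m, v)             # independence: sum logs
        if best_score is None or score > best_score:
            best_cls, best_score = cls, score
    return best_cls

print(predict((0.55, 1.1)))
```

Despite the independence assumption rarely holding exactly, the classifier is fast and often a strong baseline, which is why it appears as a leaf of the flowchart.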



Kathleen DeRose, CFA, Matthew Dixon, PhD, FRM, and Christophe Le Lannou. 2021. “Machine Learning.” CFA Institute Refresher Reading, 2022 CFA Program Level II, Reading 4.

Robert Kissell, PhD, and Barbara J. Mack. 2019. “Fintech in Investment Management.” CFA Institute Refresher Reading, 2022 CFA Program Level I, Reading 55.

If you liked this post, don’t forget to subscribe to Enterprising Investor.

All posts are the opinion of the author. Therefore, they should not be construed as investment advice, nor do the views expressed necessarily reflect the views of the CFA Institute or the author’s employer.

Image credit: ©Getty Images/Jorg Greuel

Professional training for CFA Institute members

CFA Institute members have the power to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can easily record credits using their online PL tracker.

Yoshimasa Satoh, CFA

Yoshimasa Satoh, CFA, is a director at Nasdaq. He is also a board member of CFA Society Japan and a regular member of CFA Society Sydney. Throughout his career, he has been responsible for the management and development of multi-asset portfolios, trading, technology, and data science research and development. Previously, he served as a portfolio manager of quantitative investment strategies at Goldman Sachs Asset Management and other firms. He began his career at Nomura Research Institute, where he led the equity trading technology team at Nomura Securities. He earned the CFA Institute Certificate in ESG Investing and holds a bachelor’s and master’s degree in engineering from the University of Tsukuba.
