Learning Objectives
- Optimization
- Architecture
- Data Considerations
- Training to Optimize
- ML Considerations (Regularization & Overfitting; sketch below)
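The regularization and overfitting item above can be made concrete with L2 weight decay on a linear model. A minimal NumPy sketch; the toy data, the lambda values, and the function name `ridge_fit` are illustrative assumptions, not from the lectures:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Linear regression with L2 regularization (weight decay).

    Minimizes ||Xw - y||^2 + lam * ||w||^2; a larger lam shrinks the
    weights, trading a little bias for lower variance (less overfitting).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy data: y depends only on the first feature; the other nine are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

for lam in (0.0, 1.0, 10.0):
    w = ridge_fit(X, y, lam)
    print(f"lam={lam:5.1f}  |w|={np.linalg.norm(w):.3f}")
```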
Learning Objectives
- NNs Viewed as Linear Classifiers
- Computation Graphs
- Backpropagation
- Backprop via AutoDiff
- DAG Logistic Regression (Sigmoid), illustrated in the sketch after this list
- Simple Layer Jacobians and Vectorization
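To make the computation-graph and backpropagation items concrete, here is a hand-derived forward and backward pass for logistic regression with a sigmoid output, applying the chain rule node by node. The toy input, weights, and variable names are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass through the graph: x -> z = w.x + b -> p = sigmoid(z) -> loss
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
b = 0.1
y = 1.0                                   # true label

z = w @ x + b
p = sigmoid(z)
loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy

# Backward pass: chain rule applied node by node, from the loss back to w and b.
dL_dz = p - y                             # dL/dp * dp/dz simplifies to p - y
dL_dw = dL_dz * x                         # z = w.x + b, so dz/dw = x
dL_db = dL_dz

print(loss, dL_dw, dL_db)
```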
Learning Objectives
- What is a neural network?
- Supervised Learning
- Parametric Learning
- Performance Measurement
- Linear Algebra Review
- DL vs. ML differences
- Logistic Regression and Gradient Descent (sketch below)
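A minimal batch gradient-descent loop for logistic regression is sketched below; the learning rate, iteration count, and synthetic data are illustrative choices, not values from the course:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # linearly separable toy labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)          # gradient of mean cross-entropy
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("training accuracy:", acc)
```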
Learning Objectives
- Dyna-Q Big Picture
- Learning T (the transition model)
- Learning R (the reward model; both models are used in the sketch after this list)
- Dyna-Q Recap
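A rough tabular Dyna-Q sketch, assuming a count-based estimate of T, a running average for R, and made-up hyperparameters; it is not the course's exact implementation:

```python
import numpy as np

class DynaQ:
    """Tabular Dyna-Q: real Q-updates plus simulated updates from learned T and R."""

    def __init__(self, n_states, n_actions, alpha=0.2, gamma=0.9, n_dyna=100, seed=0):
        self.Q = np.zeros((n_states, n_actions))
        self.Tc = np.full((n_states, n_actions, n_states), 1e-5)  # transition counts
        self.R = np.zeros((n_states, n_actions))                  # expected-reward model
        self.alpha, self.gamma, self.n_dyna = alpha, gamma, n_dyna
        self.seen = []                                            # (s, a) pairs observed so far
        self.rng = np.random.default_rng(seed)

    def update(self, s, a, r, s_prime):
        # Q-learning update from real experience.
        target = r + self.gamma * self.Q[s_prime].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
        # Learn T by counting transitions and R by a running average.
        self.Tc[s, a, s_prime] += 1
        self.R[s, a] += self.alpha * (r - self.R[s, a])
        self.seen.append((s, a))
        # Dyna: hallucinate n_dyna experiences from the learned models.
        for _ in range(self.n_dyna):
            si, ai = self.seen[self.rng.integers(len(self.seen))]
            p = self.Tc[si, ai] / self.Tc[si, ai].sum()
            s2 = self.rng.choice(len(p), p=p)
            target = self.R[si, ai] + self.gamma * self.Q[s2].max()
            self.Q[si, ai] += self.alpha * (target - self.Q[si, ai])
```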
Learning Objectives
- What is Q?
- Learning Procedure
- Update Rule (sketch below)
- Two Finer Points
- The Trading Problem: Actions
- Creating the State
- Discretizing
- Q-Learning Recap
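The update rule and state discretization can be sketched as follows; the bin thresholds, learning rate alpha, and discount gamma are illustrative values:

```python
import numpy as np

def discretize(value, thresholds):
    """Map a continuous indicator value to an integer bin in 0 .. len(thresholds)."""
    return int(np.searchsorted(thresholds, value))

def q_update(Q, s, a, r, s_prime, alpha=0.2, gamma=0.9):
    """The Q-learning update rule:
    Q[s, a] <- (1 - alpha) * Q[s, a] + alpha * (r + gamma * max_a' Q[s', a'])."""
    target = r + gamma * np.max(Q[s_prime])
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
    return Q

# Toy usage: 10 discretized indicator states, 3 actions (e.g. LONG, HOLD, SHORT).
thresholds = np.linspace(-0.04, 0.04, 9)     # 9 cut points -> 10 bins
Q = np.zeros((10, 3))
s = discretize(0.013, thresholds)            # today's indicator value
s_prime = discretize(0.021, thresholds)      # tomorrow's indicator value
Q = q_update(Q, s, a=0, r=0.5, s_prime=s_prime)
print(Q[s])
```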
Learning Objectives
- The RL Problem
- Mapping Trading to RL
- Markov Decision Problems
- Unknown Transitions and Rewards
- What to Optimize (sketch below)
- RL Summary
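One common answer to "what to optimize" is the discounted sum of future rewards; a tiny worked example, where the reward sequence and gamma values are made up:

```python
def discounted_return(rewards, gamma=0.95):
    """Sum of gamma**t * r_t, the quantity an infinite-horizon learner optimizes."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [1.0, 1.0, 1.0, 1.0]
print(discounted_return(rewards, gamma=1.0))   # plain sum: 4.0
print(discounted_return(rewards, gamma=0.5))   # 1 + 0.5 + 0.25 + 0.125 = 1.875
```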
Learning Objectives
- Ensemble Learners
- Bootstrap aggregating: Bagging (see the sketch after this list)
- Boosting
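A minimal bagging sketch, assuming a generic learner with fit/predict methods; the class name `BagLearner` and the constructor interface are illustrative, not a specific library API:

```python
import numpy as np

class BagLearner:
    """Bootstrap aggregating: train several learners on bootstrap samples and
    average their predictions. `learner_factory` is any callable returning an
    object with fit(X, y) and predict(X); the names here are illustrative."""

    def __init__(self, learner_factory, n_bags=20, seed=0):
        self.learners = [learner_factory() for _ in range(n_bags)]
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        n = len(y)
        for learner in self.learners:
            idx = self.rng.integers(0, n, size=n)   # sample n rows with replacement
            learner.fit(X[idx], y[idx])

    def predict(self, X):
        return np.mean([lrn.predict(X) for lrn in self.learners], axis=0)
```

Averaging learners trained on resampled data mainly reduces variance, which is why bagging helps high-variance base learners; boosting instead reweights the data toward examples the current ensemble gets wrong.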
Learning Objectives
- A closer look at KNN solutions
- What happens as K varies? (quiz)
- What happens as D varies? (quiz)
- Metric 1: RMS error (see the sketch after this list)
- In sample vs. Out of sample
- Cross validation
- Roll forward cross validation
- Metric 2: Correlation
- Overfitting
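The two metrics and roll-forward cross validation can be sketched directly; the fold arithmetic and the toy arrays below are illustrative assumptions:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Metric 1: root-mean-squared error between predictions and truth."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def correlation(y_true, y_pred):
    """Metric 2: correlation between predictions and truth."""
    return np.corrcoef(y_true, y_pred)[0, 1]

def roll_forward_splits(n, n_folds):
    """Roll-forward cross validation for time-ordered data: always train on
    the past and test on the future, never the reverse."""
    fold = n // (n_folds + 1)
    for k in range(1, n_folds + 1):
        yield np.arange(0, k * fold), np.arange(k * fold, (k + 1) * fold)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(rmse(y_true, y_pred), correlation(y_true, y_pred))

for train_idx, test_idx in roll_forward_splits(100, 4):
    print(train_idx[-1], "->", test_idx[0], test_idx[-1])
```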
Learning Objectives
- Parametric Regression
- K-nearest neighbor (see the sketch after this list)
- Kernel regression
- Training and testing
- Learning APIs
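A minimal KNN regression learner exposing the generic fit/predict learning API; the class name, the choice of k, and the toy data are assumptions for illustration:

```python
import numpy as np

class KNNLearner:
    """Instance-based (non-parametric) regression: predict the mean y of the
    k nearest training points."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)

    def predict(self, Xq):
        preds = []
        for q in np.atleast_2d(Xq):
            d = np.linalg.norm(self.X - q, axis=1)
            nearest = np.argsort(d)[: self.k]
            preds.append(self.y[nearest].mean())
        return np.array(preds)

# Usage: train on (feature, target) pairs, then query new feature vectors.
learner = KNNLearner(k=3)
learner.fit([[0.0], [1.0], [2.0], [3.0]], [0.0, 1.0, 2.0, 3.0])
print(learner.predict([[1.4]]))     # average of the 3 nearest targets
```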
Learning Objectives
- The ML problem
- Supervised Regression Learning
- How it works with stock data (sketch below)
- Backtesting
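One way the supervised-regression framing applies to stock data is to use recent returns as the features X and a future return as the target Y; the window lengths, the helper name `make_xy`, and the synthetic price series below are illustrative assumptions:

```python
import numpy as np

def make_xy(prices, lookback=5, horizon=5):
    """Build a supervised regression dataset from a price series:
    each X row is the last `lookback` daily returns, and Y is the return
    `horizon` days ahead (the quantity we want to predict, then trade on)."""
    rets = prices[1:] / prices[:-1] - 1.0
    X, Y = [], []
    for t in range(lookback, len(rets) - horizon):
        X.append(rets[t - lookback:t])
        Y.append(prices[t + horizon] / prices[t] - 1.0)
    return np.array(X), np.array(Y)

# Toy usage with a synthetic random-walk price series.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, size=300))
X, Y = make_xy(prices)
print(X.shape, Y.shape)
```

A backtest then steps forward in time, retraining only on data from the past and simulating trades on the predicted future returns, so the evaluation never peeks ahead.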