Academic Catalog

C S 12B: DEEP LEARNING

Foothill College Course Outline of Record

Effective Term: Winter 2026
Units: 4.5
Hours: 4 lecture, 2 laboratory per week (72 total per quarter)
Prerequisite: C S 12A.
Degree & Credit Status: Degree-Applicable Credit Course
Foothill GE: Non-GE
Transferable: CSU/UC
Grade Type: Letter Grade (Request for Pass/No Pass)
Repeatability: Not Repeatable

Student Learning Outcomes

  • Explain different optimization strategies and identify appropriate methods for specific tasks
  • Recognize the potential for bias and analyze the ethical implications of deep learning applications in different domains
  • Understand the need for explainability in certain domains and the challenges of interpreting increasingly complex networks
  • Use Python packages to train and evaluate deep learning models, including feedforward, convolutional, and recurrent neural networks

Description

This course offers an introduction to deep learning theories, principles, and practices. Students will explore neural networks, including perceptrons, gradient descent, and multilayer perceptrons, as well as advanced topics like convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), variational autoencoders (VAEs), and attention mechanisms. By the end of the course, students will be proficient in implementing and training neural networks using frameworks like TensorFlow, Keras, scikit-learn, and PyTorch, and will be able to critically evaluate and improve deep learning models.
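As a concrete taste of the workflow described above, the sketch below trains a single logistic-regression neuron (the differentiable cousin of the perceptron) with gradient descent. It is a minimal illustration only: it uses plain NumPy rather than the frameworks named in the description, and the toy dataset and hyperparameters are invented for the example.

```python
import numpy as np

# Toy dataset: 2-D points labeled by whether x0 + x1 > 1 (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    p = sigmoid(X @ w + b)           # forward pass: predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of cross-entropy loss w.r.t. w
    grad_b = np.mean(p - y)          # ...and w.r.t. b
    w -= lr * grad_w                 # gradient-descent update
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Because the toy labels are linearly separable, plain gradient descent recovers a good decision boundary; the same loop, written in a framework such as PyTorch or Keras, is the pattern the course builds on.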

Course Objectives

The student will be able to:

  1. Demonstrate foundational knowledge of machine learning principles
  2. Apply selected concepts from linear algebra and calculus to deep learning
  3. Create, train, and evaluate a feed-forward network using publicly available packages
  4. Identify techniques for optimizing models
  5. Create, train, and evaluate a recurrent neural network using publicly available packages
  6. Create, train, and evaluate a convolutional neural network using publicly available packages
  7. Explain the concept and use of generative models
  8. Articulate paths for future exploration of deep learning
  9. Recognize deep learning as a tool that can reduce or amplify problems in society

Course Content

  1. Machine learning foundations
    1. Training and evaluation
    2. Bias-variance tradeoff
    3. Logistic regression
    4. Perceptrons
  2. Concepts from linear algebra and calculus
    1. Vector and matrix operations
    2. Gradient and partial derivatives
    3. Chain rule for backpropagation
  3. Feed-forward networks
    1. Architectural components
    2. Forward propagation
    3. Loss and cost functions
    4. Backpropagation and gradient descent
    5. Learning rate and convergence/divergence
  4. Optimizations
    1. Hyperparameter tuning
    2. Stochastic gradient descent and mini-batching
    3. Regularization and dropout
    4. Weight initialization
    5. Batch normalization
  5. Recurrent neural networks
    1. Types of sequential data
    2. Simple recurrent architecture
    3. Vanishing/exploding gradients
    4. Long short-term memory networks
    5. Gated recurrent units
    6. Bidirectional RNNs
  6. Convolutional neural networks (CNNs)
    1. Computer vision applications
    2. Convolutional filters
    3. Activation maps
    4. Pooling layers
  7. Generative models
    1. Discriminative vs. generative tasks
    2. Generative adversarial networks
    3. Variational autoencoders
  8. Additional topics for exploration
    1. Transfer learning
    2. Deep reinforcement learning
    3. Self-supervised learning
    4. End-to-end learning
  9. Ethics and explainability
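Content areas 2 and 3 above meet in backpropagation: the chain rule applied layer by layer to the results of a forward pass. A minimal NumPy sketch, with an invented toy batch and a one-hidden-layer network chosen purely for illustration, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))         # mini-batch of 8 samples, 3 features
y = rng.normal(size=(8, 1))         # regression targets

W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)  # output layer

# Forward propagation
z1 = X @ W1 + b1
h = np.maximum(z1, 0.0)             # ReLU activation
y_hat = h @ W2 + b2
loss = np.mean((y_hat - y) ** 2)    # mean-squared-error cost

# Backpropagation: apply the chain rule layer by layer
d_yhat = 2 * (y_hat - y) / y.size   # dL/dy_hat
dW2 = h.T @ d_yhat                  # gradient for output weights
db2 = d_yhat.sum(axis=0)
d_h = d_yhat @ W2.T                 # propagate back through output layer
d_z1 = d_h * (z1 > 0)               # gradient through the ReLU
dW1 = X.T @ d_z1                    # gradient for hidden weights
db1 = d_z1.sum(axis=0)
```

A gradient-descent step would then subtract a learning rate times each gradient from the corresponding parameter; frameworks such as PyTorch and TensorFlow automate exactly this bookkeeping.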

Lab Content

  1. Environment setup
    1. Navigating Jupyter notebooks or chosen environment
    2. Running code cells, troubleshooting common errors
    3. Installing and importing deep learning frameworks (e.g., PyTorch or TensorFlow)
  2. Implementing basic neural networks
    1. Building a single-layer perceptron and training on a small dataset
    2. Adding hidden layers to form an MLP
    3. Evaluating performance with basic metrics
  3. Hyperparameter experiments
    1. Adjusting learning rate, batch size, and hidden units
    2. Tracking accuracy and loss over epochs
    3. Using a validation set for early stopping
  4. Advanced training techniques
    1. Adding batch normalization and experimenting with different optimizers
    2. Trying different activation functions (ReLU, LeakyReLU)
    3. Comparing training stability and convergence speed
  5. Working with CNNs
    1. Implementing a simple CNN for image classification
    2. Inspecting feature maps
  6. Transfer learning
    1. Loading a pre-trained model
    2. Fine-tuning on a custom dataset
    3. Observing improvements in training time and accuracy
  7. Sequence models
    1. Training a simple RNN or LSTM for time-series forecasting
    2. Adjusting sequence length and analyzing results
    3. Comparing RNN outputs to feed-forward network predictions
  8. Exploring transformers
    1. Running inference with a pre-trained transformer model for classification
    2. Observing improvements over RNN-based models
  9. Generative modeling
    1. Implementing a simple GAN
    2. Examining generated samples and adjusting training steps
    3. Noting stability issues in GAN training
  10. Data pipeline management
    1. Demonstrating shuffling, batching, caching, and prefetching with a chosen framework
    2. Handling categorical or structured data (conceptually with code snippets)
  11. Responsible artificial intelligence in practice
    1. Checking for bias in a dataset (e.g., class imbalance)
    2. Interpreting and explaining a model's behavior
    3. Discussing potential mitigations to bias and documenting findings
  12. Project integration and review
    1. Combining steps: data preprocessing → model building → training → evaluation
    2. Implementing a chosen project (e.g., image classification with a pre-trained model)
    3. Reporting results, plotting training curves, showing example predictions
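Lab topics 3 (hyperparameter experiments with a validation set) and 10 (shuffling and batching) could be prototyped along the lines below. This NumPy sketch uses a synthetic linear-regression task; the split sizes, learning rate, batch size, and patience value are illustrative assumptions, not course requirements.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=300)    # noisy linear targets

# Train/validation split
X_tr, y_tr = X[:240], y[:240]
X_va, y_va = X[240:], y[240:]

w = np.zeros(5)
lr, batch_size, patience = 0.05, 32, 5
best_val, bad_epochs = np.inf, 0

for epoch in range(200):
    order = rng.permutation(len(X_tr))             # shuffle each epoch
    for start in range(0, len(X_tr), batch_size):  # mini-batches
        idx = order[start:start + batch_size]
        Xb, yb = X_tr[idx], y_tr[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(yb)  # mini-batch gradient
        w -= lr * grad                             # SGD update
    val_loss = np.mean((X_va @ w - y_va) ** 2)     # validation MSE
    if val_loss < best_val - 1e-6:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                 # early stopping
            break
```

The same skeleton (shuffle, batch, update, validate, stop early) carries over unchanged when the model is a neural network built in a framework.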

Special Facilities and/or Equipment

1. The college will provide access to a computer laboratory with Python and an IDE installed, with sufficient privileges to allow students to install Python packages.
2. The college will provide a website or course management system with an assignment posting component (through which all lab assignments are to be submitted) and a forum component (where students can discuss course material and receive help from the instructor). This applies to all sections, including on-campus (i.e., face-to-face) offerings.
3. When taught online, the college will provide a fully functional and maintained course management system through which the instructor and students can interact.
4. When taught online, students must have currently existing email accounts and ongoing access to computers with internet capabilities.

Method(s) of Evaluation

Methods of Evaluation may include but are not limited to the following:

Tests and quizzes
Lab notebook
Written laboratory assignments which include source code, sample runs, and documentation
Reflective papers
Final examination or project

Method(s) of Instruction

Methods of Instruction may include but are not limited to the following:

Instructor-authored lectures which include mathematical foundations, theoretical motivation, and coding implementation of deep learning algorithms
Detailed review of assignments which includes model solutions and specific comments on the student submissions
Discussion which engages students and instructor in an ongoing dialog about deep learning
Instructor-authored labs through which students rigorously demonstrate their ability to implement deep learning models

Representative Text(s) and Other Materials

Chollet, François. Deep Learning with Python, 2nd ed. 2021.

Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 3rd ed. 2022.

Raschka, Sebastian, Yuxi (Hayden) Liu, and Vahid Mirjalili. Machine Learning with PyTorch and Scikit-Learn. 2022.

Types and/or Examples of Required Reading, Writing, and Outside of Class Assignments

  1. Reading
    1. Textbook assigned reading averaging 30 pages per week
    2. Reading the supplied handouts and modules averaging 10 pages per week
    3. Reading online resources as directed by the instructor through links pertinent to programming
    4. Reading library and reference material as directed by the instructor through course handouts
  2. Writing
    1. Writing technical prose documentation that supports and describes the programs that are submitted for grades

Discipline(s)

Computer Science