Tips and Tricks for the TensorFlow Developer Certificate Exam

Virajdatt Kohir
8 min read · Nov 30, 2021


The TF Certificate Virajdatt

After contemplating it for a couple of months at the start of this year, I prepared for and finally passed the TensorFlow Developer Certificate Exam. In this article, I want to share my motivation for taking the exam, the resources I used to prepare, a brief overview of the exam setting, some helpful snippets, the revision strategy I followed, how I organized my materials during the exam, and all the code I wrote building up to it.

Table of Contents:-

  1. The Motivation
  2. Resources for Preparation
  3. Additional Resources for preparation and upskilling with TensorFlow
  4. Exam Setting
  5. Tips and Tricks
  6. Revision Strategy
  7. Organizing Materials During the Exam
  8. Link to the github Repo

THE MOTIVATION:-

Focus and Consistency are important.

The things that motivated me towards getting the certificate are:-

  1. The exam is skill-based and emphasizes the ability to build and fine-tune deep learning and machine learning models.
  2. For people who are self-taught in the field of Deep Learning, this certification exam gives a structured approach to hone existing skills.
  3. An accreditation from Google that validates your skills and lets you become part of the TensorFlow Certificate network.

Additional stories and motivations are covered here.

RESOURCES FOR PREPARATION:-

Resources to prepare for the test

A thorough and detailed analysis of the resources needed to pass the certification is covered by Daniel Bourke. I highly recommend you go through his article once before reading on; if you are already familiar with the content there, carry on.

The one definitive resource for passing this exam is the DeepLearning.AI TensorFlow Developer Professional Certificate on Coursera, taught by Laurence Moroney and Andrew Ng. The 4 courses in that specialization should give you enough skills to pass and get the certificate.

ADDITIONAL RESOURCES FOR PREPARATION and UPSKILLING WITH TensorFlow:-

The following are additional resources that I believe will help you develop skills that are really valuable as a TensorFlow developer:-

  1. Understanding tf.data. This is the modern way of handling data in your TensorFlow projects. The full API can feel exhaustive when you are just beginning, so I have shortlisted a few tf.data methods that I feel are used frequently on github. There is also a complete Keras tutorial for an image classification task that demonstrates image_dataset_from_directory, which loads image data as a tf.data dataset and leads to better data cleaning and transformation pipelines in TensorFlow; a minimal sketch of that pattern follows.
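
Below is a minimal sketch of this pattern (the directory path, image size, and pipeline settings are placeholders I picked for illustration, not values from the tutorial):

import tensorflow as tf

# Hypothetical folder layout: data/train/<class_name>/*.jpg
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'data/train',                 # placeholder path
    image_size=(150, 150),        # resize every image on load
    batch_size=32,
    label_mode='categorical')     # one-hot labels for multi-class problems

# The result is a tf.data.Dataset, so the usual pipeline methods apply
train_ds = train_ds.cache().shuffle(1000).prefetch(tf.data.AUTOTUNE)
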
Aurélien Géron's amazing book on ML and DL

2. Aurélien Géron's book “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 2nd Edition” is another great resource for gaining a better understanding of Deep Learning with TF and Keras. Chapters 10–16 in particular focus on Deep Learning methods with TF and Keras and cover interesting implementation details, along with good tips on fine-tuning neural networks.

3. The Udacity course Intro to TensorFlow for Deep Learning really helped improve my understanding of Time Series analysis and of how sequence-to-sequence and sequence-to-vector models work in TF and Keras.

4. Understanding the tensorflow_datasets module. TensorFlow Datasets (tfds) is an amazing way of loading datasets in your work and especially during practice. The following code demonstrates how powerful tfds is for loading and splitting your data.

import tensorflow_datasets as tfds

# 1. Load your data as a dictionary
data, metadata = tfds.load('mnist', as_supervised=True, with_info=True)
train, test = data['train'], data['test']

# 2. Load the train and test data separately
(train, test), metadata = tfds.load('mnist', split=['train', 'test'], as_supervised=True, with_info=True)

# 3. Load the data as train, validation, and test splits
data, metadata = tfds.load('mnist', split=['train[:90%]', 'train[90%:]', 'test'], as_supervised=True, with_info=True)

train_data = data[0]
valid_data = data[1]
test_data = data[2]

EXAM SETTING:-

Exam milieu: plenty of time, no need to panic!
  1. The exam consists of 5 questions in increasing order of complexity (none of them are too complex).
  2. You are allowed 5 hours to finish the exam. (In my opinion, you won't need more than 2-3 hours to finish.)
  3. The exam is open book and lets you use any reference material throughout.
  4. The exam can be taken anytime; it is not proctored, and you can take as many breaks as needed.
  5. The exam grades you on the quality of the deep learning models you build, not the code you write.
  6. You can train your models on Google Colab, Kaggle notebooks, AWS, etc. (Just make sure to download your .h5 files and submit them within the PyCharm exam environment.)

TIPS and TRICKS:-

Wear them hats and keep those tricks up your sleeves!!
  1. Make sure to practice and have the following callbacks handy:-

a. tf.keras.callbacks.ModelCheckpoint:- This callback will save the best model based on the criteria you specify. The details for this Callback API are present at ModelCheckpoint.

# Code Snippet for ModelCheckpoint
MC = tf.keras.callbacks.ModelCheckpoint('<path-to-saving-model>',
                                        monitor='val_loss',
                                        save_best_only=True,
                                        verbose=1)

b. tf.keras.callbacks.EarlyStopping:- This callback will stop training based on the criteria you provide. It is a really handy callback in the exam, as you can run your training for an extended number of epochs and not worry about overfitting, since the training will be cut short when the stopping criterion is reached. The details for this Callback API are present at EarlyStopping.

# Code Snippet for EarlyStopping
ES = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                      patience=5,
                                      verbose=1,
                                      restore_best_weights=True)

c. tf.keras.callbacks.LearningRateScheduler:- This callback lets you dynamically update the learning rate of an optimizer during training. The details for this Callback API are present at LearningRateScheduler.

# Code Snippet for LearningRateScheduler
LR = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-5 * 10 ** (epoch / 2), verbose=1)

2. Develop the habit of using TensorBoard to look at your accuracy and loss curves. It is easy with the TensorBoard callback in Keras. This eliminates the need to use matplotlib, and you can visualize the accuracy and loss of models across multiple runs during your fine-tuning. The details for this Callback API are present at TensorBoard.

# Code Snippet for TensorBoard
TB = tf.keras.callbacks.TensorBoard('<path-to-save-tensorboard-logs>')

Here is the GitHub link that lists all these callbacks.

3. Keep all the code you wrote during your practice handy when taking the exam.

4. Here is the list of snippets that I kept handy for the exam:- TF_HANDY_SNIPPETS

5. A sneak peek into the kind of snippets you would want for the exam:-

a. Snippet to make a single-layer linear regression model.
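
For reference, a minimal version of such a snippet could look like the following (the data is a made-up toy example): a single Dense neuron trained with mean squared error is effectively a linear regression.

import numpy as np
import tensorflow as tf

# Toy data following y = 2x - 1 (made-up values for illustration)
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# One Dense unit with a single input feature = linear regression
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mse')
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([10.0])))  # should be close to 19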

b. CNN Tasks:-
a. Splitting data from disk into multiple folders (train, test, and validation)
b. ImageDataGenerator with its various transformations (Image Augmentation).
c. Transfer Learning snippet (using InceptionV3, ResNet50, etc.).
d. A few custom CNN models based on the practice you do.
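
As an illustration of points b and c (the paths, image size, and single-output head are my own assumptions for a binary task, not the exam's setup), an ImageDataGenerator with augmentation feeding a frozen InceptionV3 base might look like this:

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Image augmentation applied to the training images only
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

train_gen = train_datagen.flow_from_directory('data/train',        # placeholder path
                                              target_size=(150, 150),
                                              batch_size=32,
                                              class_mode='binary')

# Transfer learning: frozen InceptionV3 base + a small classification head
base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights='imagenet',
                                         input_shape=(150, 150, 3))
base.trainable = False

model = tf.keras.Sequential([base,
                             tf.keras.layers.GlobalAveragePooling2D(),
                             tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])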

c. NLP Tasks:-
a. Tokenizer code for NLP (text to sequence and padded sequence).
b. Embedding layers (how to create and visualize them).
c. A few RNN, LSTM, GRU, and Conv1D architectures that you used during practice.
d. Loading Text data from CSV and JSON.
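
A sketch of points a and b (the sentences, vocabulary size, and sequence length below are made up for illustration):

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = ['I love TensorFlow', 'Deep learning is fun']   # toy examples

# Text -> integer sequences -> padded sequences of equal length
tokenizer = Tokenizer(num_words=1000, oov_token='<OOV>')
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=10, padding='post', truncating='post')

# A small binary text classifier built on an Embedding layer
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16, input_length=10),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])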

d. Time-Series Tasks:-
a. Snippet to create data for time-series prediction.
b. A few RNN, LSTM, GRU, and Conv1D architectures that you used during practice.
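
For point a, a windowing helper along the lines of what the Coursera and Udacity courses teach (the function name and defaults here are my own sketch):

import tensorflow as tf

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Turn a 1-D series into (window, next_value) pairs for training
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    return ds.batch(batch_size).prefetch(1)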

6. Don’t panic if the model is over-fitting or under-fitting. Try different architectures based on the problem at hand. Please don’t start with a solution that uses a huge model (a large number of layers or neurons):-

a. Start with a baseline taught during the courses.
b. Incrementally add layers.
c. Change the optimizer and loss function, and keep in mind the callbacks discussed above to avoid overfitting.
d. Make sure you understand the input and output layers based on the task.

7. Finally, keep practicing: write the code on your own, build muscle memory, and extract the repetitive patterns.

REVISION STRATEGY:-

Here I want to outline my revision strategy for the last week before the exam. I wanted to work with datasets covering simple regression with neural networks, image classification (CNNs, transfer learning, binary/multi-class classification), NLP tasks such as text classification and text generation (RNNs, LSTMs, GRUs, and Conv1D), and time-series data. These are the sort of tasks you will be expected to work on and excel at in the exam.

  1. Image Classification:-
    Data-Set:- MNIST, Fashion-MNIST, Sign-MNIST, Flower Dataset, Cats vs Dogs
  2. Text Classification:-
    Data-Set:- IMDB, BBC, the Sarcasm dataset, the Natural Language Processing with Disaster Tweets dataset, and the Shakespeare dataset for text generation
  3. Time-Series:-
    Data-Set:- TimeSeries-Data

Callbacks:-

I fine-tuned my neural networks by configuring the callbacks outlined above. During revision, I got more hands-on experience with how these callbacks work and how they help during training.

Fine-Tuning your neural networks:-

In my experience building up to the exam, the following methods of fine-tuning worked for me:-

  1. Change the number of layers in your Neural Networks.
  2. Change the number of neurons in layers in your Neural Networks.
  3. Change the optimizer.
  4. Change the loss function.
  5. Change the batch size.
  6. Increase the number of epochs.
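
To make these knobs concrete, here is a hypothetical before/after of the kind of small changes I mean (the architecture, synthetic data, and numbers are arbitrary placeholders, not a recipe):

import numpy as np
import tensorflow as tf

# Tiny synthetic regression data, just to have something to fit
x = np.random.rand(1000, 8).astype('float32')
y = x.sum(axis=1, keepdims=True)

model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
                             tf.keras.layers.Dense(1)])

# Run 1: baseline settings
model.compile(optimizer='sgd', loss='mse')
model.fit(x, y, validation_split=0.2, epochs=20, batch_size=32, verbose=0)

# Run 2: different optimizer, loss, batch size, and more epochs
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='mae')
model.fit(x, y, validation_split=0.2, epochs=100, batch_size=64, verbose=0)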

Here is the GitHub link of work that I did during my revision:- https://github.com/Virajdatt/TensorFlow_Cert_Learning/tree/main/revision

ORGANIZING MATERIALS DURING THE EXAM:-

  1. Keep the exam instruction files open throughout the exam.
  2. Keep Google Colab open, even if you have a powerful GPU, for cases when you run into issues.
  3. Keep the code you have practiced and worked with open. It will come in handy during the exam.

Link to the entire GitHub repo (contains every piece of code I wrote building up to the exam):-

THANK-YOU:-

I hope you had as much fun reading through this article as I had writing it. Please leave a like if the content was helpful for you. You can reach me on the following platforms in case you have any questions.

Linkedin:- https://www.linkedin.com/in/virajdatt-kohir/
Twitter:- https://twitter.com/kvirajdatt
GitHub:- https://github.com/Virajdatt
GoodReads:- https://www.goodreads.com/user/show/114768501-virajdatt-kohir
