
15 Amazing Deep Learning Project Ideas to Boost Your Resume


Manika Nagpal · Published in ProjectPro · 19 min read · Sep 22, 2021

Before diving into the project ideas for Deep learning, I want to share one of my job interview experiences with a company located in Mumbai.

During my college days in 2018, I applied to a company in the Artificial Intelligence industry that provides technology solutions using AI, ML, and Deep learning for various clients. My interview was scheduled for 10 AM, and I reached there early. When I entered the office lobby, I heard a voice greeting me, “Good Morning, Welcome,” and I looked around, surprised, trying to locate where the voice came from. I thought this might be a simple solution (a speaker triggered by some sensor) that greets anyone visiting the office. Then, to my surprise, another person who seemed to be an employee of the company entered, and the voice greeted him with the same message, but this time it also included the person’s name. The TV monitor recognised the employees, opened the main door, and recorded their entrance time.

For visitors like me, the touch-screen TV had many options like ‘meeting’, ‘interview’, etc., and based on the choice we selected, the lights along the way to the interview room lit up to show the path. We just had to follow the lights to reach the interview room, and it required no human intervention. The TV monitor could also place a reminder call to the person we had a meeting or interview scheduled with. The TV was an AI Receptionist!

So if you are racking your brain (just like me on that day) about how this could possibly work, the answer is Deep learning! If you are new to Deep Learning and want to know what it is and how deep learning algorithms work, please go through an introductory article on the topic first.

Complex data-driven applications, including voice recognition, image recognition, and analytics, have driven the prominence of the global deep learning market. The increasing need for interaction between humans and machines offers new growth avenues for industry providers to deliver enhanced solutions and capabilities.

Image source — P&S Intelligence

According to Burning Glass, an analytics software company that collects millions of job postings and analyses them, jobs requiring AI and deep learning skills will grow by an astounding 39.3% over the decade.

If we search for Deep learning jobs on LinkedIn, we can see there are 44,000+ results across the world.

Image Source — LinkedIn

A growing number of technology jobs request AI and deep learning skills, and not knowing something about these technologies could prove detrimental in a few years. The good news is that you don’t need advanced degrees to land a Deep learning Engineer/ Researcher job. The best way to land a job in this fast-paced, growing Digital arena is to learn and practice.

Deep Learning Projects

We have come up with a list of 15 deep learning projects to practice that will help you prove your mettle and showcase it to the world. The projects have been divided into the following categories so that you can choose one as per your requirements.

◉ Projects in Deep Learning for Undergraduate Students

◉ Deep Learning Projects for Beginners

◉ Deep Learning Computer Vision Projects

◉ Interesting Deep Learning Projects for Intermediate Professionals

Each of the categories has cool deep learning project ideas that you will enjoy working on. Let us now start with the first category.

Projects in Deep Learning for Undergraduate Students

This section has projects on deep learning that are useful for university students.

#1 OCR using YOLO and Tesseract for Text Extraction

Do you remember taking offline MCQ-based competitive exams where we had to fill in the dots (A, B, C, D) for the correct option? These test papers were later evaluated using Optical Character Recognition to determine whether the option filled in was the correct one.

Business Use Case — Invoices and receipts are an essential aspect of any trade between two parties, be it between companies or between a roadside store and a consumer. Manually reconciling digital invoices is very time-consuming and can also lead to human errors.

Extracting information from any document is an uphill task and involves a combination of object classification and object localisation.

OCR digitisation addresses the challenge of automatically extracting this information, which plays a critical role in streamlining document-intensive processes and office automation in many financial, accounting, and taxation areas.

Image source — PyImageSearch

Dataset description -

There are various datasets available to play around with OCR. These datasets include images of invoices, receipts, and bill payments and a variety of documents like memos, emails, forms, news articles, resumes, etc.

FUNSD dataset — consists of form images
CMU dataset
ICDAR Scanned Receipts OCR and Information Extraction
Kaggle Receipt OCR

The ICDAR dataset consists of 1000+ scanned receipt images. The receipt images contain four key text fields, such as goods name, unit price, and total cost.

Example of the scanned receipt -

Image Source — ICDAR

TECH STACK:

Language: Python, Object detection: YOLO V4, Text Recognition: Tesseract OCR

Key learnings

Understanding Object detection
Understanding how to use pre-trained models of YOLO
Training a Custom Object Detector with YOLO
Understanding Text extraction using Tesseract
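To make the pipeline concrete, here is a minimal sketch of how the two stages could be wired together. It assumes a YOLOv4 detector already trained on text-field classes and exported as Darknet config/weights files (the file names below are placeholders), OpenCV 4.1.2+ for the dnn_DetectionModel helper, and pytesseract for recognition.

```python
import cv2
import pytesseract

# Placeholder file names -- replace with your trained YOLOv4 config/weights.
net = cv2.dnn.readNetFromDarknet("yolov4-custom.cfg", "yolov4-custom.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("receipt.jpg")
class_ids, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

for (x, y, w, h), score in zip(boxes, scores):
    roi = image[y:y + h, x:x + w]                    # crop the detected text field
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)     # Tesseract works best on clean grayscale
    text = pytesseract.image_to_string(gray, config="--psm 6")
    print(score, text.strip())
```

Each detected box is cropped and passed to Tesseract separately, which usually gives cleaner text than running OCR over the entire page at once.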

If you are looking for complete code implementation and videos to understand the step-by-step solution approach for this deep learning project, please check out this Build OCR from Scratch Python using YOLO and Tesseract.

#2 Store Item demand forecasting using Deep Learning

Business Objective — There is an old saying, “Retail is detail at large scale.” Every retailer must stay on top of demand planning to keep the supply of goods aligned with customer needs.

A highly accurate demand forecast is the only way retailers can predict which goods are needed for each store location. This will also ensure high availability for customers while maintaining minimal stock risk and support capacity management, store staff labour force planning, etc.

Building a forecasting model to predict store item demand is an uphill task, as there are multiple external factors like the store’s location, seasonality, changes in a store’s neighbourhood or competitive situation, and significant variation in the number of customers and goods. With this huge amount of data, no human planner could consider the full range of potential factors. Deep learning, however, makes it easier and considers these factors at a detailed level, by individual store or fulfilment channel.

We will use LSTM, which is very suitable for handling time-series data and widely used for forecasting purposes.

Dataset description

We can use the below datasets which are available for free for practicing and building forecasting models.

Kaggle Store Item Demand Forecasting
Forecast for Product Demand
UCI demand forecasting for a store

The Kaggle Store Item Demand Forecasting dataset has the following set of files

train.csv — Training data

test.csv — Test data

The fields included in the dataset are as below,

date — Date of the sales data.

store — Store ID

item — Item ID

sales — The count of items sold at a particular store on a particular date

Tech stack:-

PySpark Dependencies: PySpark.ml, PySpark.sql

Python Dependencies: Keras, NumPy, Pandas, sklearn, Matplotlib, LSTM

We will use PySpark to handle real-time streaming data (as the sales of the items and goods have to be addressed in real-time) and build highly scalable models.

Key learnings:-

Understanding the concept of forecasting
Understanding the architecture of LSTM and its implementation in forecasting
Dealing with seasonality in forecasting
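As a rough illustration of the modelling step, the sketch below frames the problem as supervised learning: a sliding window of the last 30 days of sales predicts the next day's demand with a small Keras LSTM. The series here is random placeholder data; in the actual project it would come from the Kaggle train.csv, grouped by store and item.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, window=30):
    """Turn a 1-D sales series into (samples, window, 1) inputs and next-day targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# `sales` stands in for the daily sales of one store-item pair.
sales = np.random.rand(500)          # placeholder data for the sketch
X, y = make_windows(sales, window=30)

model = Sequential([
    LSTM(64, input_shape=(30, 1)),   # 30 past days, 1 feature (sales)
    Dense(1),                        # next-day demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.1)
```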

If you are looking for complete code implementation along with videos to understand the step-by-step approach please check out this Deep Learning Project on Store Item Demand Forecasting.

#3 Medical Image segmentation using Deep learning

Business Objective: In the healthcare and medical sciences domain, the use of deep learning technologies is increasing at a brisk rate. These technologies have even outperformed medical experts and doctors in some cases by producing astonishingly accurate results. Polyp recognition and segmentation is one such technology that helps doctors identify polyps in colonoscopy images.

The ability of deep learning algorithms to blend with image data has made it possible to use deep learning to detect cancer and other deadly diseases among human beings.

Dataset Description

Kaggle CVC-ClinicDB dataset
Kvasir Dataset
ETIS-Larib Polyp DB

CVC-Clinic data consists of frames that are extracted from colonoscopy videos. The dataset consists of several examples of polyp frames and also corresponding ground truth images.

CVC-ClinicDB database consists of two sets of images:

Original images: frame_number.tiff
Polyp masks: frame_number.tiff

Tech Stack

Language: Python, Deep learning library: Pytorch, Computer vision library: OpenCV

The other Python libraries required are scikit-learn, albumentations, pandas, NumPy, etc.

Recommended Reading: 15 Computer Vision Project Ideas for Beginners in 2021

Key learnings from the project:-

Data augmentation using PyTorch
Understanding VGG architecture and building VGG blocks using PyTorch
Training and predicting with UNet++ models
Understanding Computer Vision and its applications in the medical field
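Since the project leans heavily on data augmentation, here is a small sketch of an augmentation pipeline with the albumentations library; the transforms and file paths are illustrative. The key point is that the image and its ground-truth mask are passed together, so both receive identical geometric transforms.

```python
import albumentations as A
import cv2

# A small, illustrative augmentation pipeline for segmentation data.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.Rotate(limit=30, p=0.5),
])

image = cv2.imread("frame_001.tiff")             # original colonoscopy frame (placeholder path)
mask = cv2.imread("frame_001_mask.tiff", 0)      # corresponding polyp mask (placeholder path)

augmented = transform(image=image, mask=mask)    # image and mask are transformed together
aug_image, aug_mask = augmented["image"], augmented["mask"]
```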

If you are looking for complete code implementation and videos to understand the step-by-step approach, please check out Medical Image Segmentation Deep Learning Project.

Deep Learning Projects for Beginners

This section lists projects in deep learning that are simple and straightforward to implement. For those who have tried a few deep learning project ideas before and want to step up their game, this section will prove to be highly relevant.

#1 Fake News Detection Deep Learning Project

Business Use Case — Checking any information’s authenticity for printed and digital media has been a longstanding issue affecting businesses and society. The news media has evolved from the printed newspapers and magazines to a digital form such as mobile news applications, blogs, social media applications, etc. You can get an update on Twitter much faster than even the news channels and mobile apps, which provide real-time feed.

Due to the digital age of mobile applications, it has become easier for consumers to acquire the latest news at their fingertips. But is whatever we read on these platforms true every time? The answer is no.

Image source — giphy.com

Let’s take a real-life example from our favourite chat application, WhatsApp. You have probably received numerous messages about curing and preventing the COVID-19 virus. These messages are often fake, and the sad part is that many people believe these forwards and even follow them, which has sometimes led to dangerous outcomes.

It is essential and indispensable to identify and differentiate fake news. One way to determine fake news is by fact-checking every news story with experts, but practically it is impossible to do so: it is a very time-consuming process, and we would need skilled experts from different areas to verify the news. The other way is to use Deep Learning and AI to automate the detection of fake news.

Companies like Facebook, Google, etc., are using AI to detect and remove false news from their platforms.

Datasets for this Deep Learning Project

We have numerous datasets available online to practice.

ISOT Fake News Dataset — consists of several articles with both Real and Fake news. The articles are of different types, including World-News, Politics news, etc.
Fake News Content Detection from Kaggle

Fake News dataset from Kaggle is a complete training dataset with the following attributes -

id: unique id for article

title: title of the article

author: author-name

text: text of the article

label: a label for an article that determines whether the article is potentially unreliable or not

1 -> fake news

0 -> true news

Tech Stack -

Language: Python, Libraries: Scikit-learn, Glove, Flask, nltk, pandas, NumPy, Tensorflow, Keras.

Solution Methodology

Step 1: After importing the dataset, the first step is to preprocess the data using different techniques like tokenising, stemming, removing stop words, etc, using NLTK.

Step 2: Feature Extraction using TF, TF-IDF.

Step 3: Training the classifier with Tensorflow, Keras, and Glove to build a classifier and predict fake and true news

Image Source — SemanticScholars
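The first two steps of this methodology can be sketched with a few lines of NLTK and scikit-learn. The snippet below is only a baseline illustration of preprocessing and TF-IDF features with a simple classifier; the full project replaces the classifier with a Keras model on GloVe embeddings, as described in step 3.

```python
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))   # requires nltk.download("stopwords")

def clean(text):
    """Step 1: lowercase, keep letters only, drop stop words, and stem."""
    tokens = re.sub(r"[^a-z\s]", " ", text.lower()).split()
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_words)

# `texts` and `labels` stand in for the article bodies and 0/1 labels from the Kaggle file.
texts = ["Breaking news example ...", "Another article ..."]
labels = [1, 0]

vectorizer = TfidfVectorizer(max_features=5000)          # step 2: TF-IDF features
X = vectorizer.fit_transform(clean(t) for t in texts)

clf = LogisticRegression(max_iter=1000).fit(X, labels)   # simple baseline classifier
```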

Key Learnings from the Project -

Text tokenisation using the Keras tokenizer, and understanding text vectorisation and word embeddings
Understanding sequence neural network algorithms like RNN, GRU, and LSTM
Performing text preprocessing like removal of stop words, lemmatisation, stemming, etc.
Building a word embedding layer with GloVe

If you are looking for complete code implementation and videos to understand the step-by-step approach, please check out NLP and Deep Learning For Fake News Classification in Python.

#2 Colouring old B&W photos

Business Objective:- We all love colours, don’t we? If a friend recommends a very old black-and-white movie, you will probably be a little reluctant to watch it even if it is a perfect movie. There is also a trend in the entertainment industry of re-releasing famous old B&W movies to attract new audiences.

Image Source

Automated image colourisation of B&W images has become a hot topic of exploration in computer vision and deep learning. Image colourisation takes a grayscale (black and white) image as input and then produces a colourised image as output.

The colourised image obtained as output should represent and match the input’s semantic colours and tones.

For example, the colour of a sky on a clear sunny day must be “blue”, and it can’t be coloured “pink” by the model.

Dataset description:-

ImageNet dataset
Humpback Whale Identification
MIRFLICKR dataset

Tech stack — OpenCV, Caffe, Python, Numpy

Key learnings from the project:-

Understanding the Lab colour space colourisation technique
Converting images to RGB space and also to Lab colour space
Understanding the working of OpenCV
Using the Caffe library in image colourisation and model training
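The Lab colour space idea can be illustrated with a few lines of OpenCV: the model only ever sees the lightness channel L and must predict the two colour channels a and b. The sketch below shows the conversions involved, with the model's prediction stubbed out; the file path is a placeholder.

```python
import cv2

bgr = cv2.imread("old_photo.jpg")                  # placeholder path
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)         # convert to Lab colour space
L, a, b = cv2.split(lab)                           # L = lightness, a/b = colour channels

# A colourisation model is given only the L channel and must predict a and b;
# merging the predicted channels back and converting to BGR yields the colour image.
predicted_a, predicted_b = a, b                    # stand-ins for the model's output
colourised = cv2.cvtColor(cv2.merge([L, predicted_a, predicted_b]), cv2.COLOR_LAB2BGR)
```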

Deep Learning Computer Vision Projects

If you are specifically interested in projects based on deep learning and visual computing, go through the list of exciting projects below.

#1 Image Segmentation using Mask R-CNN with TensorFlow

Business Objective: Fire is an abnormal event that can cause significant injury and property damage in a very short time frame. The best possible way to reduce the wreckage caused by fire is to detect the fire source as early as we can, before it spreads and reaches the point of no return.

To detect fire, we can adopt an RGB model and Image localisation strategy. The RGB model is based on chromatic and disorder measurement that extracts fire and smoke pixels. In contrast, image localisation helps to detect the location of a single object (fire in our case) in any given image.

To solve segmentation problems, we can use the Mask RCNN algorithm.
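To give a feel for the chromatic part of the approach, here is a tiny sketch of an RGB rule that flags likely fire pixels. The thresholds are illustrative assumptions, not tuned values; the actual project goes further and segments the fire region with Mask R-CNN.

```python
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")                    # placeholder path
r = frame[:, :, 2].astype(int)                     # OpenCV loads images as BGR
g = frame[:, :, 1].astype(int)
b = frame[:, :, 0].astype(int)

# Illustrative chromatic rule: fire pixels tend to have R > G > B and a high red value.
fire_mask = ((r > 190) & (r > g) & (g > b)).astype(np.uint8) * 255

cv2.imwrite("fire_mask.png", fire_mask)
```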

Dataset Description

There are multiple datasets available consisting of video clips and images to get our hands dirty and predict whether there is a fire or not and how dangerous it is.

Faculty of Electrical Engineering, Split University
Kaggle Fire Dataset
GitHub Fire dataset
Kaggle Fire Detection test

The Kaggle Fire Dataset consists of 2 folders -

fire_images folder -> a set of 755 outdoor fire images, some of them containing heavy smoke
non_fire_images folder -> a collection of 244 nature images like forest, tree, grass, river, people, etc.

Tech stack

Language — Python, Libraries — NumPy, Keras, TensorFlow, Mask R-CNN, scikit-image

Key learnings from this Deep learning Project

Understanding the role of the Backbone in the Mask RCNN model
Performing image annotation using the VGG Annotator
Understanding the concept of transfer learning
Understanding concepts like the ROI Classifier and Region Proposal Network (RPN)
Understanding the concepts of image localisation, image segmentation, and image detection
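For the segmentation model itself, a common route is transfer learning from COCO weights. The sketch below assumes the open-source matterport Mask_RCNN implementation (installed as the `mrcnn` package) and uses placeholder file names and dataset objects; it only illustrates the shape of the config and training call.

```python
# A minimal transfer-learning sketch assuming the matterport Mask_RCNN repository
# (https://github.com/matterport/Mask_RCNN) is installed as `mrcnn`.
from mrcnn.config import Config
from mrcnn import model as modellib

class FireConfig(Config):
    NAME = "fire"
    NUM_CLASSES = 1 + 1          # background + fire
    STEPS_PER_EPOCH = 100
    IMAGES_PER_GPU = 1

config = FireConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")

# Start from COCO weights and skip the heads that depend on the number of classes.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])

# `train_set` and `val_set` are assumed to be mrcnn Dataset objects built from the
# VGG Annotator masks; only the network heads are fine-tuned here.
# model.train(train_set, val_set, learning_rate=config.LEARNING_RATE, epochs=10, layers="heads")
```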

If you are looking for complete code implementation along with videos to understand the step-by-step approach for this deep learning project, please check out this Image Segmentation using Mask R-CNN with Tensorflow.

#2 Image caption generator using deep learning

Business Objective:- Caption generation is a challenging AI problem where the objective is to generate a textual description given an image. It is widely used to help visually impaired people understand the content of images better.

It requires methods from both fields:

Knowledge of computer vision to understand the content of the image and interpret it.
Language models from the field of Natural Language Processing (NLP) to turn that understanding into words in the right order.

Deep learning methods have achieved great results on examples of this problem.

Dataset description:-

Three datasets, Flickr8k, Flickr30k, and MS COCO, are popular among data science community members.

In the Flickr8k dataset, each collected image is associated with five different captions that describe the entities and events depicted during the capturing of the image.

Flickr8k is an excellent dataset to start with as it is small in size and can be trained quickly on low-end laptops/desktops with no GPU required.

Our dataset structure is as follows:-

Flick8k/

Flick8k_Dataset/ :- contains the 8000 images

Flick8k_Text/

Flickr8k.token.txt:- contains image id along with five captions for the image

Flickr8k.trainImages.txt:- contains the training image id’s

Flickr8k.testImages.txt:- contains the test image id’s

Tech Stack

Python, LSTM, CNN, NLP, Numpy, NLTK, Keras, TensorFlow

Key learnings -

Image preprocessing using Keras
Creating a vocabulary for an image
Text preprocessing and tokenising
A working implementation of CNN and LSTM algorithms
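A typical first step is to encode each image with a pre-trained CNN and feed the resulting feature vector to the LSTM decoder. The sketch below uses Keras' VGG16 as the encoder; the image file name is a placeholder standing in for one of the Flickr8k images.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

# Use a pre-trained CNN as the image encoder; drop the classification layer
# and keep the 4096-d fc2 features that are later fed to the LSTM decoder.
base = VGG16(weights="imagenet")
encoder = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

img = image.load_img("example_flickr8k.jpg", target_size=(224, 224))  # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = encoder.predict(x)        # shape (1, 4096)
```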

#3 Face detection system

Business Objective: This is one of the best deep learning project ideas for beginners to start with, because it is easy and fun to work on.

Facial recognition technology has developed rapidly over the years and is now widely used in our day-to-day life, from unlocking your phone and playful Instagram and Snapchat filters to security and surveillance in offices and buildings. I know you would be curious about how to create a facial recognition system yourself.

Face recognition technology is a subset of the Object Detection field, which focuses on detecting instances of semantic objects. Its primary purpose is to track and visualise human faces within digital images.

Datasets description:-

Real and Fake dataset from Kaggle
Google Facial Expression Comparison dataset
Face Images with Marked Landmark Points
Facial Keypoint Detection

The Facial Keypoint Detection from Kaggle consists of 15 key points, which represent the elements of the face:

left_eye_center, right_eye_center, left_eye_inner_corner, left_eye_outer_corner, etc.

Data files

training.csv: list of 7049 training images. Each row contains the image data as a row-ordered list of pixels and the (x, y) coordinates for the 15 key points.

test.csv: list of 1783 test images. Each row contains image data as a row-ordered list of pixels and ImageId

Tech Stack:-

Python, OpenCV, face_recognition library, Glob, Numpy

Recommended Reading: 15 Object Detection Project Ideas with Source Code for Practice

Key learnings from the project:-

Using Computer Vision and OpenCV for building a face recognition system
Understanding the concept of Image Processing
Advanced object detection techniques based on position
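The face_recognition library listed in the tech stack makes a working prototype surprisingly short. The sketch below encodes one known face and compares it against the faces found in a new image; the file names are placeholders.

```python
import face_recognition

# Encode a known face once, then compare any new image against it.
known_image = face_recognition.load_image_file("employee.jpg")      # placeholder paths
unknown_image = face_recognition.load_image_file("visitor.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)
    print("Same person" if match[0] else "Unknown face")
```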

#4 Human Pose Estimation using Deep Learning

Business Objective: Knowing the orientation of a person has several real-life applications, such as Activity Recognition, Motion Capture and Augmented Reality, Training Robots, and Motion Tracking for Consoles in the gaming industry.

Image Source

Several approaches have been introduced for Human Pose Estimation. These methods often identify the individual body parts as a first step and then learn the connections between them to estimate the pose.

Dataset description:-

COCO Keypoints challenge
MPII Human Pose Dataset
VGG Pose Dataset

The MPII Human Pose Dataset includes 25,000+ images containing more than 40,000 people with annotated body joints. The images were collected systematically using an established taxonomy of everyday human activities. There are a total of 410 human activities, and each image is provided with an activity label.

Tech Stack:- Python, OpenCV, NumPy, Sklearn, Matplotlib, TensorFlow

Key learnings from the project:-

Visualising normal embeddings using t-SNE
Classifying and labelling persons’ faces and poses in images
Univariate and Bivariate analysis to understand the data
Applying OpenCV algorithms
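One common OpenCV-based approach is to run a pre-trained pose network through cv2.dnn and read the keypoints off the output heatmaps. The model files below are placeholders for an OpenPose-style Caffe model trained on MPII (an assumption of this sketch); the heatmap decoding is the part worth noting.

```python
import cv2

# Placeholder model files: a pose-estimation network in Caffe format.
net = cv2.dnn.readNetFromCaffe("pose_deploy.prototxt", "pose_iter_160000.caffemodel")

frame = cv2.imread("person.jpg")
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368), (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
heatmaps = net.forward()                       # shape: (1, n_keypoints, H, W)

points = []
for i in range(heatmaps.shape[1]):
    _, conf, _, point = cv2.minMaxLoc(heatmaps[0, i])     # strongest response per keypoint
    x = int(w * point[0] / heatmaps.shape[3])
    y = int(h * point[1] / heatmaps.shape[2])
    points.append((x, y) if conf > 0.1 else None)         # drop low-confidence joints
```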

#5 Self-Driving Car

Business Objective:-

Elon Musk, the CEO of Tesla, once said, “Autonomous cars are no longer beholden to Hollywood sci-fi films”.

According to a report, approximately 1.3 million people die each year as a result of road traffic crashes, which is over 3,000 a day. And if we account for injuries as well, the number rises to roughly 20–50 million each year.

The root cause of these accidents?

Ans. Human error.

The most common reasons are distracted, drunk, or reckless driving. Can we take human error out of the equation?

Image Source

Tech Stack

Python, matplotlib, Keras, NumPy, pandas, scikit-learn, OpenCV, TensorFlow, CNN, Pytorch, Google Colab, GPU/TPU

Datasets description-

Berkeley DeepDrive BDD100k
Baidu Apolloscapes
Comma.ai
Oxford’s Robotic Car
nuScenes Dataset

nuScenes is a publicly available large-scale dataset for autonomous driving.

It has 1,400,000 camera images, detailed map images, and 390,000 lidar sweeps, along with a complete sensor suite (1x LIDAR, 5x RADAR, 6x camera, IMU, GPS) and manual annotations for 23 object classes.

Key learnings from the project:-

Advanced object detection using OpenCV and CNNs
Scalable distributed training and performance optimisation using PyTorch
Training a model using a Google Colab GPU
Real-time capturing of images using the PyTorch ImageLoader
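Behaviour-cloning style projects often start from a small end-to-end CNN that maps a camera frame to a steering angle. The sketch below is a Keras model loosely following NVIDIA's PilotNet layout; the training data (frames and angles) would come from one of the datasets above, and the layer sizes are illustrative.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, Lambda

# Input: a cropped/resized front-camera frame; output: a single steering angle.
model = Sequential([
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalise pixels to [-1, 1]
    Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    Flatten(),
    Dense(100, activation="relu"),
    Dense(50, activation="relu"),
    Dense(1),                                    # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")
# model.fit(images, steering_angles, ...)  # arrays built from a driving dataset
```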

Interesting Deep Learning Projects for Intermediate Professionals

Working professionals who enjoy taking steps to upgrade their skillset will find this section worth their time, as it has simple deep learning project ideas that introduce recent and popular algorithms.

#1 Language Translator using Deep Learning

Business Objective: Have you ever been to a new place and struggled to understand the local language? I bet you have used Google Translator at least once to try and mimic the local language and accent.

One of the very popular subfields of computational linguistics, Machine Translation (MT), focuses on translation from one language to another. With the increasing popularity and efficiency of deep learning, NMT (Neural Machine Translation) has become the most efficient algorithm to carry out this task. We all have used Google Translator, which is the leading industry example of Machine translation.

One of the prominent versions of NMT is the Encoder-Decoder structure. The architecture combines two recurrent neural networks (RNNs) used together to create a translation model.

The main objective of an NMT model is to take a text input in any language as an input and translate it into a different language as output.

Datasets description:-

Many Things dataset — dataset of numerous languages and their translations to English
Statistical Machine Translation Workshop dataset
Kaggle Hindi English Corpora

The Statistical Machine Translation Workshop dataset consists of various dataset files for different languages like Bulgarian, Danish, German, French, Greek, Spanish, etc.

All the files are in archive format, and each archive consists of two sets of files: one in English and one in the language for which we want the translation.

For example, the file “fr-en.tgz” is for French-to-English translation. You can unzip this archive file using the tar command or simply using zip-file extraction software.

There will be two files, one in English and one in French, each with more than 2 lakh sentences.

English: europarl-v7.fr-en.en (288M)

French: europarl-v7.fr-en.fr (331M)

Tech Stack

Python, NLP, LSTM, Numpy, Keras, TensorFlow

Recommended Reading: 15 NLP Projects Ideas for Beginners With Source Code for 2021

Key learnings from the project

Fundamentals of Machine Translation
Using NLP for preprocessing and cleaning text data
Tokenisation of texts and words using NLP
An understanding of a working implementation of state-of-the-art algorithms like LSTM and RNN
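The encoder-decoder idea translates almost directly into Keras code. The sketch below wires an LSTM encoder to an LSTM decoder through the hidden and cell states; the vocabulary sizes and embedding dimension are illustrative, and teacher-forced decoder inputs are assumed at training time.

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

# Illustrative sizes; real values come from the tokenised corpora.
src_vocab, tgt_vocab, latent_dim = 10000, 10000, 256

# Encoder: read the source sentence and keep only its final states.
enc_inputs = Input(shape=(None,))
enc_emb = Embedding(src_vocab, latent_dim)(enc_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generate the target sentence, initialised with the encoder states.
dec_inputs = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, latent_dim)(dec_inputs)
dec_outputs, _, _ = LSTM(latent_dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = Dense(tgt_vocab, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```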

#2 Chatbot

Business objective: Chatbots are not new to any of us. They are widely used as virtual assistants by various businesses for addressing their customer experience touchpoints. Chatbots can understand what humans are referring to and can guide them to the desired result.

In the current fast-paced life, we all expect answers immediately, and we expect them to be accurate at the same time. The Chatbots can effortlessly address this with almost none or very minimal human intervention.

Image source

Datasets:-

Kaggle Deep NLP Chatbot dataset
Yahoo Language Data
SQuAD — consists of questions posed by people on Wikipedia articles

The Yahoo Language Data is a curated dataset in question-answer format, which makes it easy to practice with.

Tech Stack

Python, TensorFlow, nltk, NumPy, scikit_learn, Spacy, TextBlob, Word2Vec, RNN

Key learnings from the project

1. Data pre-processing and parse trees of the chats to understand the text flow.

2. Generate own words using word vectors with the help of Word2Vec model

3. Using TensorFlow to create the Seq2Seq model

4. Understand the working of RNN algorithms and using generative-based models to generate new text.
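As a small illustration of the Word2Vec step, the sketch below trains embeddings on a toy set of tokenised chat sentences with gensim; in the real project the sentences would come from the chatbot corpus, and the learned vectors would feed the Seq2Seq model.

```python
from gensim.models import Word2Vec

# Toy corpus standing in for tokenised chat transcripts.
sentences = [
    ["hello", "how", "can", "i", "help", "you"],
    ["i", "need", "help", "with", "my", "order"],
    ["your", "order", "has", "been", "shipped"],
]

# `vector_size` is the gensim >= 4.0 argument name; older versions call it `size`.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=2)

vector = model.wv["order"]                  # 100-d embedding for a word
print(model.wv.most_similar("help", topn=3))
```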

#3 Virtual Assistant

Business Objective: “JARVIS? I need your help.” Does that sound familiar? Yes, I am talking about Iron Man’s AI assistant.

Image Source

Honestly, if you ask me, what’s the coolest part of the Iron Man movies? The elaborate action sequences? The dream team-ups with Avengers? Tony Stark’s wit? No. For me, it is J.A.R.V.I.S.

I always feel I should have an assistant like J.A.R.V.I.S whom I can command.

Virtual assistants like Alexa, Siri, and Google Now are commonly used to perform simple tasks like “Switch off the lights”, “Play a song from the XYZ movie”, “What’s the weather”, etc.

Dataset description:-

Coached Conversational Preference Elicitation (CCPE)
Taskmaster-1

The CCPE dataset comprises 12,000 annotated utterances from 502 dialogues recorded between a user and an assistant discussing movie preferences.

Each conversation has the following fields:

conversation ID: A unique random ID for the conversation. The ID has no meaning.

utterances: It is an array of the utterances generated by the workers.

Each utterance has the following fields:

index: An index that indicates the order of the utterances in the conversation.

speaker: User or Assistant Bot to show who generated the utterance.

text: The raw text transcribed from the user’s spoken recording.

segments: It is an array of semantic annotations of spans in the text.

Tech stack — Python, NLTK, NLP, NLU, Google text-to-speech and Speech Recognition module, RNN for text generation

Key learnings from the project -

Speech recognition for converting speech input to text using Google libraries
Implementation of APIs, which allow two or more applications to share data
Content extraction from voice using NLP and Natural Language Understanding (NLU)
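The listening and speaking ends of the assistant can be prototyped with the SpeechRecognition and gTTS libraries from the tech stack. The sketch below transcribes microphone input with Google's free web recogniser and saves a spoken reply as an mp3; it assumes a working microphone and the PyAudio dependency.

```python
import speech_recognition as sr
from gtts import gTTS

recognizer = sr.Recognizer()

# Listen on the default microphone and transcribe with Google's free web API.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio)
    print("You said:", command)
except sr.UnknownValueError:
    command = ""
    print("Sorry, I did not catch that.")

# Speak a reply back using Google text-to-speech (saved as an mp3 here).
reply = gTTS("You said: " + command) if command else gTTS("Please repeat that.")
reply.save("reply.mp3")
```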

#4 Auto text completion and generation with Deep learning

Business objective:- When I began typing the title of this article, “Auto text completion and generation with De….”, Google Docs began automatically completing my sentence. In this case, it accurately suggested Deep Learning!

If you’ve ever used Google search or got recommendations while composing an email, this wouldn’t surprise you because predictive text generation has been in use for a very long time now and is used widely across industries.

Dataset Description:-

New York Times Comments
Alice in Wonderland
Shakespeare dataset

The New York Times Comments data contains information about the comments on the articles published in New York Times in Jan-May 2017 and Jan-April 2018.

The CSV files contain over 2 million comments with 34 features, and the articles files contain 16 features for more than 9,000 articles.

Tech Stack

Python, LSTM, Keras, Matplotlib, NumPy, TensorFlow, labml

Key learnings from the project:-

Auto text generation using LSTM
Building and training a model using TensorFlow and Keras
Handling typo mistakes using LSTM by preserving the errors that can be back-propagated through time and layers
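The core of the project is a next-word language model. The sketch below builds n-gram training sequences from a toy corpus and fits a small Keras LSTM to predict the next word; in the real project the corpus would be the New York Times comments or the Shakespeare text, and the layer sizes are illustrative.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = ["deep learning projects boost your resume",
          "deep learning makes text generation possible"]   # placeholder lines

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
vocab_size = len(tokenizer.word_index) + 1

# Build n-gram sequences: every prefix of a line predicts its next word.
sequences = []
for line in corpus:
    tokens = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(tokens)):
        sequences.append(tokens[:i + 1])

max_len = max(len(s) for s in sequences)
sequences = pad_sequences(sequences, maxlen=max_len, padding="pre")
X, y = sequences[:, :-1], sequences[:, -1]

model = Sequential([
    Embedding(vocab_size, 64, input_length=max_len - 1),
    LSTM(128),
    Dense(vocab_size, activation="softmax"),   # probability of each next word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=50, verbose=0)
```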

