Hi, I am Omkar Thawakar, and welcome to my portfolio.
Omkar Thawakar
Researcher, Programmer, Developer, Artist ...
[Feb 2024] MobiLlama released.
[Feb 2024] 1 paper accepted to CVPR 2024.
[Aug 2023] Started PhD at MBZUAI under Prof. Fahad Khan.
[Jul 2023] 1 paper accepted to MICCAI 2023.
[Jun 2023] 1 paper accepted to CAIP 2023.
[Dec 2022] Volunteer at the NeurIPS 2022 Workshop on Vision Transformers.
[Dec 2022] Volunteer at the ACCV 2022 Workshop on Vision Transformers.
[Nov 2022] Reviewer for CVPR 2023 and WACV 2022.
[Oct 2022] Presented MSSTS at ECCV 2022 in Tel Aviv.
[Jul 2022] MSSTS accepted at ECCV 2022.
[Sep 2021] Joined the MBZUAI CV Lab under Dr. Fahad Khan.
Advisors: Prof. Fahad Khan, Dr. Salman Khan
"Bigger the better" has been the predominant trend in recent Large Language Model (LLM) development. However, LLMs are not well suited to scenarios that require on-device processing, energy efficiency, a low memory footprint, and fast response times. These requisites are crucial for privacy, security, and sustainable deployment. This paper explores the "less is more" paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource-constrained devices. Our primary contribution is the introduction of an accurate and fully transparent open-source 0.5 billion (0.5B) parameter SLM, named MobiLlama, catering to the specific needs of resource-constrained computing with an emphasis on enhanced performance at reduced resource demands. MobiLlama is an SLM design that starts from a larger model and applies a careful parameter-sharing scheme to reduce both the pre-training and the deployment cost. Our work strives not only to bridge the gap in open-source SLMs but also to ensure full transparency: the complete training data pipeline, training code, model weights, over 300 checkpoints, and evaluation code are available at https://github.com/mbzuai-oryx/MobiLlama.
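The parameter-sharing idea in the abstract can be illustrated with a toy parameter count: reusing one feed-forward (FFN) block across all transformer layers shrinks the model without changing its depth. This is a sketch of the general technique only; the layer sizes and the exact sharing scheme below are assumptions, not MobiLlama's actual configuration.

```python
# Toy parameter-count comparison for cross-layer FFN sharing.
# All dimensions here are illustrative assumptions.
def attn_params(d_model):
    # Q, K, V and output projections: four d_model x d_model matrices
    return 4 * d_model * d_model

def ffn_params(d_model, d_ff):
    # up-projection and down-projection matrices
    return 2 * d_model * d_ff

def total_params(n_layers, d_model, d_ff, share_ffn):
    attn_total = n_layers * attn_params(d_model)
    # with sharing, a single FFN block is reused by every layer
    ffn_total = ffn_params(d_model, d_ff) * (1 if share_ffn else n_layers)
    return attn_total + ffn_total

baseline = total_params(22, 2048, 5632, share_ffn=False)
shared = total_params(22, 2048, 5632, share_ffn=True)
print(f"baseline: {baseline/1e9:.2f}B  shared: {shared/1e9:.2f}B")
```

The attention blocks stay per-layer, so depth (and thus representational capacity per forward pass) is preserved while the FFN weights, usually the bulk of the parameters, are paid for only once.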
This project was designed and developed in the CVPR Lab, IIT Ropar, India.
Guide: Dr. Subrahmanyam Murala
Sponsored by: Yamaha Labs, India.
In the automotive industry, engine assembly is one of the most crucial stages. During assembly, various parts are put together to complete the desired two-stroke engine. The piston is the part responsible for producing the stroke that generates mechanical energy for acceleration, and it carries a number of rings depending on the engine type. A piston assembled without its rings leads to a faulty engine, so it is mandatory to verify the piston rings during engine assembly. Our project aims at real-time detection of the rings on a piston: the objective is to develop a small device that detects piston rings in real time, with a target device cost of under 20,000 INR.
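The detection idea can be sketched in miniature. The real device works on camera frames, but the core check, "does every ring groove show a seated ring?", can be illustrated on a 1-D intensity profile taken across the piston grooves. The numbers and thresholds below are illustrative assumptions, not the project's actual pipeline.

```python
# Simplified sketch of the ring-presence check. A seated ring reflects
# brightly, so each ring should appear as one bright segment in a scan
# line across the piston; a missing ring leaves a dark, empty groove.
def bright_segments(profile, level=150):
    """Count maximal runs of samples at or above `level`."""
    segments, in_segment = 0, False
    for v in profile:
        if v >= level and not in_segment:
            segments += 1
            in_segment = True
        elif v < level:
            in_segment = False
    return segments

def rings_present(profile, expected_rings, level=150):
    # The piston passes the check only if every groove shows a ring.
    return bright_segments(profile, level) >= expected_rings

# Piston body ~100, occupied grooves ~200 (bright ring), empty groove ~20.
with_rings = [100]*5 + [200]*3 + [100]*5 + [200]*3 + [100]*5
missing_one = [100]*5 + [20]*3 + [100]*5 + [200]*3 + [100]*5
print(rings_present(with_rings, 2), rings_present(missing_one, 2))  # True False
```

In the real system the same decision would be made per video frame, e.g. by circle or edge detection on the piston image rather than on a synthetic scan line.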
Reinforcement Learning (RL) is a subfield of Machine Learning in which an agent learns by interacting with its environment, observing the results of those interactions, and receiving a reward (positive or negative) accordingly. This way of learning mimics the fundamental way in which humans (and animals) learn. Machines are not yet as capable as humans at extracting knowledge and building on what they have previously learned, and reinforcement learning offers a way to train robots in the real world much as we humans learn from childhood onwards. In this project I used a reinforcement learning approach (Q-learning) to train a robot to follow a line precisely without being explicitly programmed for it.
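The Q-learning loop behind such a line follower can be sketched in a few lines. The states, actions, reward scheme, and transition model below are simplified assumptions for illustration, not the robot's exact sensor setup.

```python
import random

random.seed(0)  # deterministic demo

# Discretised sensor reading (where the line is relative to the robot)
STATES = ["line_left", "line_center", "line_right"]
ACTIONS = ["steer_left", "go_straight", "steer_right"]

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # +1 when the action re-centres the robot on the line, -1 otherwise
    good = {"line_left": "steer_right",
            "line_center": "go_straight",
            "line_right": "steer_left"}
    return 1.0 if good[state] == action else -1.0

def step(state, action):
    # simplified transition: a correct action re-centres the robot
    return "line_center" if reward(state, action) > 0 else state

def choose(state):
    if random.random() < EPSILON:                         # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])      # exploit

for _ in range(500):                     # episodes
    s = random.choice(STATES)
    for _ in range(10):                  # steps per episode
        a = choose(s)
        r = reward(s, a)
        s2 = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steers back toward the line:
print(max(ACTIONS, key=lambda a: Q[("line_left", a)]))  # steer_right
```

On the physical robot, the state would come from the line sensors and the actions would drive the motors, but the table update is exactly this one.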
NVIDIA® DGX Station™ is the world's fastest workstation for leading-edge AI development. This fully integrated and optimized system enables your team to get started faster and effortlessly experiment with the power of a data center in your office. For more info, visit AI Workstation for Data Science Teams. The following article describes how to access an NVIDIA DGX Station remotely from a Mac with simple SSH and X11 forwarding.
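The setup boils down to SSH with X11 forwarding. This is a sketch: the username and hostname are placeholders, and on the Mac side it assumes an X11 server such as XQuartz is installed.

```shell
# Connect with X11 forwarding ("user" and the hostname are placeholders).
# Requires XQuartz on the Mac so there is a local X11 server to draw to.
ssh -X user@dgx-station.example.com

# If an application is blocked under -X, trusted forwarding often helps:
ssh -Y user@dgx-station.example.com

# Once logged in, GUI/X11 apps on the DGX display on the Mac, and the
# station's GPUs can be checked from the same shell:
nvidia-smi
```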
The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. The paper described several neural networks in which backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems that had previously been insoluble. Today, backpropagation is the workhorse of learning in neural networks. Although it is the most widely used and most successful algorithm for training neural networks, several factors affect error-backpropagation training.
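The core of the algorithm, propagating the error gradient backwards through the chain rule and nudging the weights against it, fits in a few lines. Below is a minimal sketch, assuming the smallest possible network (a single sigmoid neuron) trained on the logical AND function with squared error; real networks stack this same update layer by layer.

```python
import math
import random

random.seed(1)

# One sigmoid neuron: y = sigmoid(w0*x0 + w1*x1 + b)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 0.5  # learning rate

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

for _ in range(2000):
    for x, t in data:
        y = forward(x)
        # Backward pass: chain rule through loss L = (y - t)^2 / 2
        # and the sigmoid, giving dL/dz = (y - t) * y * (1 - y).
        delta = (y - t) * y * (1 - y)
        for i in range(2):
            w[i] -= lr * delta * x[i]   # dL/dw_i = delta * x_i
        b -= lr * delta                  # dL/db = delta

print([round(forward(x)) for x, _ in data])  # [0, 0, 0, 1]
```

The factors mentioned above, learning rate, weight initialisation, and the shape of the activation's gradient, all appear directly in this update and determine how quickly (or whether) it converges.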