I am a research assistant and PhD student in the Department of Computer Science and Engineering at the University of Michigan, Ann Arbor. I joined Prof. Scott Mahlke's CCCP research group in fall 2016. I received my BSc degree in summer 2016 from Sharif University of Technology.
My research interests include:
My CV is available here.
salar (at) umich (dot) edu
4861 BBB
2260 Hayward Street
Ann Arbor, MI 48109
Affiliations:
CCCP Research Group
University of Michigan, Dept. of CSE
During this project, we developed a deep CNN framework for smartphones running Android OS and accelerated it by exploiting the smartphone's GPU via the RenderScript framework. We also benchmarked the framework against well-known deep CNNs such as Alex Krizhevsky's networks for ImageNet 2012 and CIFAR-10, and LeNet-5 for the MNIST dataset.
Deploying a CNN model on a mobile platform consists of several steps. First, the trained model, which includes the trained weights and parameters of the different layers as well as the overall network architecture, is converted to an appropriate format. Next, the converted model files are uploaded to the mobile device, where an installed application containing our framework runs the network offline.
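As a sketch of the conversion step, a trained model's parameters could be serialized into a simple length-prefixed binary format that an on-device framework can parse. The layer name, shapes, and format below are illustrative assumptions, not the actual format used by our framework:

```python
import struct

import numpy as np

def export_layer(f, name, weights, biases):
    """Write one layer's name, weights, and biases in a simple
    little-endian, length-prefixed binary layout (illustrative)."""
    name_bytes = name.encode("utf-8")
    f.write(struct.pack("<I", len(name_bytes)))
    f.write(name_bytes)
    for arr in (weights, biases):
        arr = np.asarray(arr, dtype=np.float32)
        f.write(struct.pack("<I", arr.size))   # element count
        f.write(arr.tobytes())                 # raw float32 data

with open("model.bin", "wb") as f:
    # Hypothetical conv layer: 32 filters of shape 5x5x3, plus 32 biases.
    export_layer(f, "conv1", np.zeros((32, 5, 5, 3)), np.zeros(32))
```

The resulting `model.bin` would then be bundled with, or uploaded to, the mobile application.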
View on GitHub
Download a video demo
The aim of this project is to design and develop a system that detects human hand gestures and recognizes the numbers shown in them. The project consists of two parts, each implemented on different hardware.
The first part is the pre-processing, which is carried out using the OpenCV library on a PC. It consists of several steps:
First, an image of the hand gesture is captured using a webcam.
Second, the skin tone of the hand is detected and the image is converted to binary format.
Third, with the help of morphology and a series of consecutive dilations and erosions, the edges of the hand image are smoothed.
Finally, the smoothed binary image data is transferred serially to the next part.
The second part, which is responsible for the detection algorithm, is implemented on an FPGA with a NIOS II processor.
Our detection algorithm is based on [1] and is composed of the following steps:
First, the center of mass of the received binary image is calculated.
Second, we draw several circles of different radii, centered at the computed center of mass.
Next, we traverse the circumference of each circle and count the number of effective 0/1 transitions. Then, the average number of transitions over all of the drawn circles is calculated.
Finally, the recognized number shown in the hand gesture equals (Transitions / 2) - 1.
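The circle-sampling idea can be sketched in Python with NumPy (the radii and sampling resolution below are illustrative; in the actual system this runs on the NIOS II processor):

```python
import numpy as np

def recognize(binary):
    """Estimate the shown number from a binary hand image by counting
    0/1 transitions along circles around the hand's center of mass."""
    ys, xs = np.nonzero(binary)
    cy, cx = ys.mean(), xs.mean()           # center of mass of the hand
    transitions = []
    for r in (20, 30, 40):                  # illustrative radii, in pixels
        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        py = np.clip((cy + r * np.sin(theta)).astype(int), 0, binary.shape[0] - 1)
        px = np.clip((cx + r * np.cos(theta)).astype(int), 0, binary.shape[1] - 1)
        samples = binary[py, px] > 0
        # Count 0/1 changes along the circle, wrapping around.
        transitions.append(np.sum(samples != np.roll(samples, 1)))
    avg = np.mean(transitions)
    return avg / 2 - 1                      # (Transitions / 2) - 1
```

Each raised finger crosses a circle twice (entering and leaving), hence the division by two; the subtraction accounts for the wrist/palm crossing.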
[1]: Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board.
During this project, we developed a bowling-style game using OpenGL libraries in C++. The aim of the project was to practice the different transformation steps of the image-rendering pipeline, such as the Model, Camera, and Projection transformations.
In this game, the player rolls the bowling ball to target different holes at the end of the lane; each hole has a different radius and distance.
During gameplay, the player can freely change the position and direction of the camera.
All of the objects in the game were designed and created in Blender.
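As an illustration of the transformations this project exercised, the Model, View (Camera), and Projection matrices compose into a single MVP matrix. Below is a NumPy sketch with illustrative values (the game's actual code is in C++):

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (like gluPerspective)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def translate(tx, ty, tz):
    """Homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

# Model places the ball in the world; View moves the world opposite the camera.
model = translate(0, 0, -10)        # illustrative: ball 10 units down the lane
view = translate(0, -1, 0)          # illustrative: camera raised 1 unit
proj = perspective(60, 4 / 3, 0.1, 100)
mvp = proj @ view @ model           # applied right-to-left to each vertex
```

Changing the camera freely, as in the game, amounts to recomputing the View matrix each frame while the Model and Projection matrices are reused.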