USTC, Hefei, 2018
Previously: at West Lake, Hangzhou.

Xumiao Zhang
Ph.D. student at University of Michigan

4917 BBB Building
2260 Hayward Street
University of Michigan
Ann Arbor, MI 48109

Email: xumiao [at]

I am a Ph.D. student in Computer Science and Engineering at the University of Michigan, Ann Arbor, advised by Prof. Z. Morley Mao. My research interests include wireless/cellular network measurement and systems, mobile computing, and cutting-edge mobile applications.

Prior to UMich, I received my B.E. from the University of Science and Technology of China, under the supervision of Prof. Xiangyang Li. I was a member of the School of the Gifted Young and was enrolled in the Talent Program in Computer and Information Science and Technology.

You can find my CV here.

What's New

May 2019
Serving on the Shadow Program Committee for ACM IMC 2019.
February 2019
I was awarded an ACM HotMobile 2019 Travel Grant.
August 2018
I moved to Ann Arbor and started my Ph.D. life at UMich!
July 2017
I started my internship with Prof. Chunyi Peng in the Mobile System, Security and Networking (MSSN) Lab at The Ohio State University.
June 2017
My first paper (demo) was accepted to ACM MobiCom'17: The Sound of Silence: End-to-End Sign Language Recognition Using SmartWatch.
September 2015
I was enrolled in the Talent Program in Computer and Information Science & Technology, USTC.
August 2014
I started my college life at the University of Science and Technology of China.

Research Projects


MobileInsight [MobiCom '16] is a cross-platform software package for cellular network monitoring and analysis on end devices. It collects, analyzes, and exploits runtime network information from operational cellular networks.

The Sound of Silence is a portable SmartWatch-based American Sign Language (ASL) recognition system. The system is based on the intuitive idea that each sign has a specific motion pattern that can be transformed into unique signals and then analyzed by neural networks.



Publications

Poster: MobiQuery: A Search Engine for Understanding Mobile Network Performance PAM '19

Shichang Xu, Xumiao Zhang, Jiachen Sun, Xiao Zhu, Z. Morley Mao
Proceedings of the 20th Annual International Conference on Passive and Active Network Measurement (PAM '19)

Understanding the availability and performance of cellular networks as perceived by mobile devices is essential for a wide range of use cases. For example, network operators rely on such information to troubleshoot performance degradations, and app developers require it to diagnose Quality of Experience (QoE) issues and improve app design. Towards this goal, mobile measurement platforms such as Mobilyzer [1] have been developed. Mobilyzer leverages crowdsourcing to collect measurements and shares data across different users to facilitate various use cases. However, it still requires significant effort for users to answer even simple questions, such as "what is the 95th-percentile latency on Android devices in New York on the AT&T network to Akamai servers?" Platform users need to first analyze the available measurement data to understand whether their questions can be answered. If not, they need to plan and schedule proper measurement tasks on available devices to collect the relevant data. After collecting the measurement results, they need to judiciously process the data to get meaningful results. This process significantly reduces the efficiency with which various entities can analyze interesting network phenomena. In this work, we propose MobiQuery, a public search engine atop Mobilyzer for understanding mobile network performance. It is designed around two principles. (1) Providing an easy-to-use but powerful user interface: users can easily express the network performance of interest for different use cases. Specifically, MobiQuery supports queries on different network metrics (e.g., latency and throughput) under different contexts (location, time, network type, signal strength, network operator, etc.). (2) Reducing measurement overhead: MobiQuery performs optimizations to minimize the resource usage of the network measurements needed to fulfill a query.
Because the network and battery resources available for measurements on mobile devices are scarce, measurement tasks need to be carefully scheduled to minimize the resource overhead on user devices. Upon receiving a query, MobiQuery analyzes previously collected measurement data and checks whether the results are sufficient to answer the query with high confidence. If not, it intelligently generates corresponding measurement tasks and schedules them on available devices. The scheduler leverages information such as the current device context to maximize the measurement success rate and capture the phenomena of interest. It also considers device-specific resource constraints and minimizes the overall resource overhead. Furthermore, redundant measurement tasks among multiple queries are avoided. Compared with existing mobile measurement platforms, MobiQuery can effectively help understand mobile network performance with minimal user effort and low measurement overhead.

Demo: The Sound of Silence: End-to-End Sign Language Recognition Using SmartWatch MobiCom '17

Qian Dai, Jiahui Hou, Panlong Yang, Xiangyang Li, Fei Wang, Xumiao Zhang
Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking (MobiCom '17)

Sign language is a natural and fully fledged communication method for deaf and hearing-impaired people. In this demo, we propose the first SmartWatch-based American Sign Language (ASL) recognition system, which is more comfortable, portable, and user-friendly and offers accessibility anytime, anywhere. The system is based on the intuitive idea that each sign has a specific motion pattern that can be transformed into unique gyroscope and accelerometer signals, which are then analyzed and learned by a long short-term memory recurrent neural network (LSTM-RNN) trained with connectionist temporal classification (CTC). In this way, signs and context information can be correctly recognized using an off-the-shelf device (e.g., a SmartWatch or Smartphone). Experiments show that, in the known-user split task, our system reaches an average word error rate of 7.29% when recognizing 73 sentences formed from 103 ASL signs and achieves a detection ratio of up to 93.7% for a single sign. The results also show that our system adapts well to new users, achieving an average word error rate of 21.6% at the sentence level and an average detection ratio of 79.4%. Moreover, our system performs real-time ASL translation, outputting speech within 1.69 seconds on average for a sentence of 12 signs.
@inproceedings{Dai:2017:DSS:3117811.3119853,
  author    = {Dai, Qian and Hou, Jiahui and Yang, Panlong and Li, Xiangyang and Wang, Fei and Zhang, Xumiao},
  title     = {Demo: The Sound of Silence: End-to-End Sign Language Recognition Using SmartWatch},
  booktitle = {Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking},
  series    = {MobiCom '17},
  year      = {2017},
  isbn      = {978-1-4503-4916-1},
  location  = {Snowbird, Utah, USA},
  pages     = {462--464},
  numpages  = {3},
  doi       = {10.1145/3117811.3119853},
  acmid     = {3119853},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {activity recognition, mobile sensing, wearable computing},
}


Teaching

Fall 2017
Fundamentals of Database Systems


Honors & Awards

February 2019
Student Travel Grant, ACM HotMobile 2019
April 2018
Outstanding Graduate, Provincial Department of Education of Anhui
2014 – 2017
Outstanding Student Scholarship, USTC (for 4 consecutive years)
January 2017
Honorable Mention of Mathematical Contest in Modeling 2017 (MCM)
June 2016
Second Prize of USTC Electronic Design Contest, Institute of Electronics, CAS
October 2015
Outstanding Student Leadership, USTC
May 2015
Social Responsibility Scholarship, USTC
September 2014
Outstanding Freshman Scholarship, USTC

Academic Activities


Please contact me via email: base64 -D <<< "eHVtaWFvQHVtaWNoLmVkdQ=="
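(Note for Linux readers: the `-D` flag above is the macOS/BSD spelling of base64's decode option; GNU coreutils spells it `-d`. A portable equivalent of the command above:)

```shell
# Decode the base64-obfuscated address; -d is the GNU coreutils
# decode flag (macOS/BSD base64 uses -D for the same purpose).
echo "eHVtaWFvQHVtaWNoLmVkdQ==" | base64 -d
```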
