Peng Zhang’s studies have taken him from China to Ireland, where he is now trying to improve AI tech for the smart factories of the future.
Peng Zhang studied mechanical engineering at the Harbin Institute of Technology in China, before going to France and completing a master’s degree in advanced robotics.
He came to Ireland for his PhD research in 2020 and is now based at Technological University of the Shannon in Athlone. His work is funded by Confirm, the Science Foundation Ireland research centre for smart manufacturing.
‘I am working on the optimisation of distributed deep neural network deployment in edge environments’
– PENG ZHANG
Tell us about the research you’re currently working on.
My research work is related to smart manufacturing. Industrial manufacturing has entered a new stage with the rapid development of the internet of things and 5G technology.
However, the industrial environment is different from the cloud environment; it is often called the fog or edge environment. Compared to cloud computing, edge computing devices sit closer to the factory floor. This can drastically reduce the latency caused by transferring data between computing devices and the factory, while also improving the protection of data privacy.
I am working on the optimisation of distributed deep neural network deployment in edge environments. Currently, I am researching pipeline parallelism, which is an enhancement of model parallelism. Model parallelism addresses the problem that a large deep neural network model cannot fit on a single device, such as a GPU.
My research includes analysing the performance of pipeline parallelism under different settings and proposing methods to improve it.
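The idea behind pipeline parallelism can be illustrated with a small sketch. Below, a toy "model" is split into sequential stages (standing in for partitions placed on separate devices), and micro-batches flow through the stages in a pipelined schedule so that different stages work on different micro-batches at the same time. All names here (`make_stage`, `run_pipeline`) are illustrative, not from any particular library or from Zhang's own code.

```python
def make_stage(weight):
    """A toy stage: multiplies its input by a fixed weight.

    In a real deployment each stage would be a slice of a neural
    network living on its own device (e.g. one GPU or accelerator).
    """
    return lambda x: x * weight

def run_pipeline(stages, micro_batches):
    """Drive micro-batches through the stages in pipelined order.

    Returns the outputs plus a schedule of (time_step, stage, batch)
    tuples showing how stages overlap on different micro-batches.
    """
    num_stages = len(stages)
    results = list(micro_batches)
    schedule = []
    # At time step t, stage s works on micro-batch t - s (if it exists),
    # so the pipeline "fills up" and all stages stay busy in the middle.
    total_steps = num_stages + len(micro_batches) - 1
    for t in range(total_steps):
        for s in range(num_stages):
            b = t - s
            if 0 <= b < len(micro_batches):
                results[b] = stages[s](results[b])
                schedule.append((t, s, b))
    return results, schedule

stages = [make_stage(w) for w in (2, 3, 5)]  # 3-stage "model"
outputs, schedule = run_pipeline(stages, [1, 1, 1, 1])
print(outputs)   # each micro-batch passes through all three stages
```

With three stages and four micro-batches, the pipelined schedule finishes in six time steps, whereas pushing each batch through all stages one at a time (plain model parallelism) would take twelve; that overlap is what pipeline parallelism buys.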
In your opinion, why is your research important?
Manufacturing environments require stringent response times to ensure the correctness of the manufacturing process, and moving decision functions from the cloud to the edge is a promising way to meet them.
However, edge devices are characterised by tight resource constraints and dynamic, heterogeneous natures, so training deep learning models or deploying trained models over these distributed and heterogeneous resources remains an open challenge. What I am doing is developing solutions to these challenges.
What inspired you to become a researcher in this area?
Through my undergraduate and master's studies in mechanical engineering, and especially robotics, I developed a strong interest in industrial robots and humanoid robots.
At the same time, my master's studies introduced me to research in artificial intelligence. Intelligent algorithms and systems are of great significance in robotics, so I chose to pursue research in computer science to deepen my understanding of this field.
What are some of the biggest challenges or misconceptions you face as a researcher in your field?
The biggest challenge I have faced was the Covid-19 pandemic. I started my research at the beginning of 2020 and we had to isolate at home for much of that year, so balancing research work and personal life during that time was difficult.
In terms of my research work, the heterogeneous edge environment is another challenge. 'Heterogeneous' refers to the varied set-ups found at the edge: different types of accelerators, different kinds of models and so on. This heterogeneity makes deployment in edge environments a big challenge.
Do you think public engagement with science has changed in recent years?
Yes, I think public engagement with science changed during the pandemic. People now pay more attention to technological developments.
Additionally, the balance between technological development and nature is becoming a topic of increasing concern.