Souvik Kundu is a fifth-year PhD student in Professor Peter Beerel’s Energy Efficient Secure Sustainable Computing Group (EESSC), co-advised by Professor Massoud Pedram. Earlier this year, Souvik achieved the rare feat of being first author on papers accepted to both the International Conference on Computer Vision (ICCV) and the Conference on Neural Information Processing Systems (NeurIPS) in the same year. These conferences are two of the most prestigious events in the machine learning community. Below, Kundu shares some details of his research in computer vision and computer architecture and explains why AI research is so important to the world right now.
AI and Machine Learning are quickly becoming essential to modern society. From your perspective as a researcher, what makes these technologies so important, and what are the major challenges researchers face?
Yes, absolutely! Artificial intelligence (AI) and machine learning (ML) enabled devices are becoming omnipresent in our daily lives, spanning healthcare, finance, entertainment, and transportation. It is fascinating to see how AI has augmented human intelligence and is enabling a technology-driven future. For example, the drug discovery domain is on the verge of taking a bold and promising step forward with the help of AI. At the same time, Level 4 autonomous driving is no longer a dream. There have also been tremendous efforts to use AI in combating climate change. In fact, many of today’s AI-driven applications are safety-critical, meaning they are essential to the safety and well-being of modern society.
We are now finally coming to understand just how important it is that the researchers and engineers behind these technologies act and think responsibly about society as a whole. AI researchers should be knowledgeable enough to red-flag possible misuse of such an immensely powerful tool. That’s why being able to analyze, understand, improve, and responsibly innovate the underlying technology is more important than ever before. Let me mention just three of many areas where we face big challenges with AI-enabled applications.
The first is privacy. Guaranteeing user privacy while maintaining the performance of ML models that rely on huge amounts of data is a big challenge. Another growing concern is that AI models are getting bigger every day, which can have a significant impact on energy consumption and carbon emissions. People don’t usually think of AI/ML as being bad for the environment, but in many cases it is! Finally, ML models can be vulnerable to noisy and unseen data whose distribution differs from the one on which they were trained. This raises serious concerns and can lead to life-threatening consequences when such models are deployed in safety-critical applications.
Your research is specifically in the areas of computer vision and computer architecture. Can you explain a bit more about what these fields are exactly?
Yes. My research primarily focuses on algorithm-architecture co-design for energy-efficient and robust machine learning models for computer vision applications. Computer vision (CV) is the area of research that focuses on the representation, processing, understanding, and application of various vision tasks, including image classification, object detection, segmentation, and video tracking. Computer vision also plays a key role in the growing area of augmented and virtual reality. Current CV heavily relies on ML models to deliver improved performance on these tasks. Computer architecture, on the other hand, deals with the functionality, organization, and implementation of computer systems, with a focus on the underlying hardware design.
Why are these areas of research so vital to AI and Machine Learning?
Today it has become harder and harder to fit more transistors on chips within a given power and area budget. Simply put, we are reaching the limit of the computing power we can put in our machines. At the same time, we need more computing power than ever before!
I believe it is high time for algorithm and architecture research to work more closely together. This is something that is heavily stressed by my advisor in our research group. In this way we can solve the limitations of architecture with algorithmic novelty and the limitations of our algorithms with architectural innovations. This will not only open new research possibilities but will also allow us to improve computing performance in a world with reduced budgets and resources.
You recently had papers accepted to not one but two of the most prestigious conferences dealing with these areas. Can you share some of the specifics of your research and the impact your work can have?
The work accepted at ICCV is on robustness analysis and improvement of brain-inspired spiking neural networks (SNNs). As model robustness is extremely important for trustworthiness in safety-critical applications, we provided a detailed analysis of the current robustness status of extremely low-latency SNNs.
We were able to show that, contrary to prior belief, our current best SNNs are not nearly as secure as previously thought. We then went further and developed novel strategies to make these networks more reliable against adversarial attacks, thereby making them more trustworthy overall.
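The paper’s specific attack and defense strategies aren’t reproduced here, but the basic idea of an adversarial attack, perturbing an input slightly in the direction that increases the model’s loss, can be sketched with the classic Fast Gradient Sign Method on a toy linear classifier (all names and values below are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: nudge input x by eps in the
    direction (sign of the loss gradient) that increases the loss."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x, true label y in {-1, +1}.
# For a hinge-style loss, d(loss)/dx = -y * w when the margin is violated.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1

score = float(w @ x)                      # clean prediction score
grad = -y * w                             # loss gradient w.r.t. the input
x_adv = fgsm_perturb(x, grad, eps=0.1)    # small, bounded perturbation
adv_score = float(w @ x_adv)

# The perturbed input's score is pushed toward misclassification,
# even though x_adv differs from x by at most eps per coordinate.
print(score, adv_score)
```

Robustness evaluations like the one in the paper measure how much a model’s accuracy degrades under perturbations of this kind, and defenses aim to keep that degradation small.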
The work accepted to NeurIPS is on model intellectual property (IP) vulnerability. With the growth of the machine-learning-as-a-service (MLaaS) business model, demand for model IP protection is on the rise. Earlier researchers proposed methods to protect model IP by preventing other models from mimicking a released black-box model’s performance. We provided a detailed analysis of the extent to which such black-box model IPs actually protect their confidentiality. We then proposed a new distillation scheme that can largely exploit the information leakage of these models and make the IP vulnerable by allowing replication of their performance. Through this research, we pose a fundamental question about the possibility of model IP protection in a knowledge distillation framework.
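The core mechanism behind knowledge distillation, the framework this work probes, is a student model minimizing the divergence between its predictions and a teacher’s softened outputs. The sketch below shows only that standard objective, not the paper’s distillation scheme; the function names, logits, and temperature are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; a higher T yields softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions:
    the quantity a student minimizes to mimic a (black-box) teacher."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [3.0, 1.0, 0.2]
matched = [3.0, 1.0, 0.2]      # student that already mimics the teacher
mismatched = [0.1, 2.5, 0.3]   # student far from the teacher

print(distillation_loss(matched, teacher))     # near zero: perfect mimicry
print(distillation_loss(mismatched, teacher))  # positive: student has more to learn
```

The IP question the paper raises can be read through this lens: even when a teacher’s outputs are restricted to discourage mimicry, whatever information still leaks through them can, in principle, be exploited by a distillation objective like this one.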
What are the next steps in your research?
The next big step in my research is to tackle the ongoing challenge of power consumption and efficient learning in increasingly large ML models. The human brain can perform multiple tasks on a power budget of only about 20 W; by contrast, a computer can consume 250 W just to perform 1,000-class image classification! Beyond operating at such an extremely low power budget, the human brain can also generalize its learning. This generalization is extremely important for an AI/ML model to perform well on unseen data. I plan to continue drawing such inspiration from human learning and to make machines learn more like we do, in an efficient and privacy-preserving way.
Published on December 13th, 2021
Last updated on December 13th, 2021