Interview: Anusua Trivedi, Data Scientist on Microsoft’s Advanced Data Science & Strategic Initiatives Team

https://archive.org/download/insideAI NewsPodcast/insideAI NewsPodcast.mp3

In this podcast interview, I caught up with Anusua Trivedi, a Data Scientist on Microsoft's ADS team, to get her take on the upward trajectory of AI and deep learning that we're seeing in the industry today. She works on developing advanced deep learning models and AI solutions for Microsoft's clients and partners. She is an advanced trainer and conducts hands-on deep learning labs. She also works closely with the MSR CNTK team and has been training customers on CNTK. Prior to joining Microsoft, Anusua held positions at UT Austin and the University of Utah. Anusua is a frequent speaker at machine learning and AI conferences.

Daniel – Managing Editor, insideAI News

insideAI News: Welcome to the insideAI News podcast for today. My name is Daniel Gutierrez and I’m the resident data scientist and managing editor for insideAI News. Today we have a guest from Microsoft, Anusua Trivedi, who is a data scientist on Microsoft’s Advanced Data Science & Strategic Initiatives team. I have a series of questions we can go through so why don’t we get started.

Anusua Trivedi: Sure. Thank you so much, Daniel.

insideAI News: Great to have you! Let’s start with something simple here. Why don’t you give us a little background, your own personal background, and tell us what brought you to Microsoft.

Anusua Trivedi: Sure. Thank you so much for giving me the opportunity to interview with you today. My academic interest was, and always has been, data. I was always intrigued by data mining and database techniques. That is what brought me to do my master's in computer science at the University of Utah. However, my academic interests shifted from data mining to machine learning when I took the machine learning class there in my first semester.

That love actually grew into an obsession, and I started publishing papers on machine learning while doing my master's, slowly getting into applying machine learning to medical data. That opened up a whole new world for me. I've been very fortunate to be able to jump straight from grad school into a data science career. I started working with the State of Utah on their education data. While working in Utah, I was also very interested in bioinformatics: genetic sequences, genetic modeling, and all the related problems. So I joined the bioinformatics department at the University of Utah and started exploring how we can actually help patients by applying machine learning techniques to gene analysis.

My husband moved to UT Austin for a new job, and I moved with him to Austin. I applied for a bio-data scientist position at UT Austin's Texas Advanced Computing Center (TACC), and I started working on deep learning models for the first time, on medical data for the same gene sequence analysis. That was really eye-opening, and I got to see how deep learning was far more powerful than traditional machine learning approaches.

Eventually, while I was speaking at a conference, PyData in Seattle, I was approached by Microsoft to interview with their advanced data science team. I interviewed, really liked the job description, and gladly took the data science position. So as you can see, none of the steps that brought me to where I am today were premeditated, but looking back, I'm very happy with the choices that helped me end up here.

insideAI News: Wow. That’s a great background that you have for what’s become such a hot industry right now with AI, machine learning, deep learning. So good show with that. The timing is great. Now, let’s talk a little bit about what your primary work responsibilities are, and tell us a little something about your current projects.

Anusua Trivedi: Sure, Daniel. I work on a customer-facing team, so my day-to-day work involves lots of customers and lots of different problem scenarios. Every day is a new challenge, and I love it. We have seen huge growth in AI-related projects over the last few years. All the customers we had trying out machine learning are now trying to move into the deep learning realm. I usually train customers on the latest AI technologies, and I also work with customers to develop POCs and help them come up with solutions to their AI problems. The most talked-about AI products at Microsoft last year have been Cognitive Services and the Bot Framework. We have seen huge numbers of customers who want to come in and start building smart bots.

For example, we worked with one of our partners, PCS, and helped them build a smart bot application called Ventura for Singapore Airlines. So now if somebody is flying Singapore Airlines, they do not need to speak only in English. The Ventura application supports four different dialects of Mandarin, and the bot can convert everything from Mandarin to English, take care of your booking, book flights, and do all the cool stuff you can imagine. That one has been very successful, and we have continued doing similar work with other partners like Accenture and EY.

Our team basically developed the smart component of these smart bots. They use deep neural networks, which is what deep learning is all about. The Microsoft Cognitive Toolkit, or CNTK, is an open source deep learning framework from Microsoft. It's one of the leading open source deep learning frameworks, and it helps us train DNNs and build AI models very easily. Currently, I'm working on some computer vision problems in the retail space with one of our customers. What we are trying to do is recognize the most visually similar clothes. When I say visually similar, it's not just a shirt-to-shirt comparison, but comparison down to fine-grained detail: what the pattern of the shirt is, what the texture is, what the neckline, hem length, and sleeve length are. The AI system we are building can go down to that level of detail and bring up the most similar shirt to the one you are searching for. One application of this: suppose you like your friend's shirt. You take a photograph of it on your phone and ask, "Hey, where can I buy this shirt?" The app, with the AI model behind it, can look up the most similar one and say, "Hey, maybe you can buy this at Amazon or Walmart." All of this is possible because of AI today.
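To make the visual-similarity idea concrete, here is a minimal sketch of one common approach: use a pretrained convolutional network as a fixed feature extractor and rank catalog images by cosine similarity to the query photo. The choice of ResNet50/ImageNet features and cosine similarity is an illustrative assumption, not the actual system described in the interview.

import numpy as np
import tensorflow as tf

# Pretrained backbone used as a fixed feature extractor (2048-d embeddings).
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float32 array of shape (N, 224, 224, 3), pixel values in [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    return backbone.predict(x)

def most_similar(query_embedding, catalog_embeddings, top_k=5):
    """Rank catalog items by cosine similarity to the query photo's embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = catalog_embeddings / np.linalg.norm(catalog_embeddings, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(-scores)[:top_k]

In practice the catalog embeddings would be computed once offline, and a query photo would be embedded on demand and matched against them.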

insideAI News:  Well, I think you’re working on some pretty fascinating stuff, and some of the things you mentioned intrigue me. Maybe you can describe the technologies, frameworks, and tools that you use in your work a little bit more?

Anusua Trivedi: Yeah, sure. We are building out Azure as the first AI supercomputer; that's the vision. We are trying to democratize AI as much as possible. Microsoft Azure already has a huge number of analytics tools: Azure Machine Learning Studio, Stream Analytics, Cognitive Services, which I just talked about, data analytics, Azure Bot Service, and these are just a few to start with. There are many more coming out and planned. We are building out cloud processing power, not just traditional CPU-based architecture but also GPUs. GPUs are a very powerful tool that helps you increase your processing power. The success of deep learning depends on two major components. One is Big Data. The other is huge computation power.

With the advent of this whole Big Data era, we are getting large amounts of labeled data from companies, and to crunch it we now have GPU computation power in the Azure cloud, which helps us go through these huge volumes of data much more easily and much faster. There are many open frameworks we use alongside the tools I mentioned that we already have in the cloud. Some of the open frameworks we use are our own Cognitive Toolkit, CNTK, Google's TensorFlow, Caffe, and MXNet. All of these are deep learning frameworks, and we use them seamlessly in Azure. The power of using Azure is not based only on the frameworks or the toolkits, but on the combination of the whole end-to-end pipeline. Azure gives us the power to ingest large volumes of data, store it in our databases, clean it using our analytics tools, stage the data properly, apply deep neural networks through all these open frameworks I talked about, and get visualizations out very easily. So with Azure and all of our tools, as well as Azure supporting all the open toolkits, we are trying to build a full end-to-end story here.
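As a simplified illustration of using one of these open frameworks in that pipeline, here is a minimal sketch of defining and training a small DNN with TensorFlow's Keras API on toy data. The data, model shape, and framework choice are assumptions made purely for illustration, not the actual customer workload.

import numpy as np
import tensorflow as tf

# Stand-in for data that has already been ingested, cleaned, and staged.
X_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,)).astype("float32")

# A small fully connected DNN; the architecture is purely illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# On a GPU-backed VM the same code runs unchanged; TensorFlow uses the
# GPU automatically when one is available.
model.fit(X_train, y_train, epochs=5, batch_size=32)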

insideAI News: That’s great! Thank you for that rundown on all that Microsoft is doing in this space. It sounds very significant. Let’s shift gears here with a more philosophical question. So we’re in sort of a “hyper-hyped” environment right now with respect to AI and deep learning. What are your observations about where it’s heading, and what do you think about the so-called “killer AI” mystique we keep reading about in the mainstream press?

Anusua Trivedi: Yeah, there have been lots of articles and lots of interviews on that, right? I would say artificial intelligence is not a myth anymore. We are seeing it in our everyday lives. It's not a futuristic concept; we are experiencing it every day. We see it in the speech recognition power of our phones. You can just say, "Open Google," and tell it to open a map. You do not have to spell out, "Hey, I want to travel here"; the map application puts the starting position at your current location, and you just talk to it. Your phone understands you. That is a huge win for speech recognition. For image search, you can just type something like "Show me related car images," and it will actually try, and it's pretty smart. Bing can pull in pretty good images based on what you provide in the search query.

Question answering bots are very, very common; our customers are building them every day using our components. So there's a massive boom in the everyday use of AI, from self-driving cars to AI gaming. I would say we at Microsoft are driving an AI vision that was started by our CEO. AI is going to get more and more sophisticated from here onward, with improved speech, voice, image, and video recognition, and it will completely change the way we interact with our everyday devices. I know there has been a notion about this killer AI mystique. I would say we are not far enough along to comment concretely on the killer AI people are concerned about, because I just do not see us there yet. I do not see AI as a human replacement, or anything like that.

However, a genuine concern that I see growing, and I think it's absolutely valid, is that a lot of people fear losing their jobs. The thing is, throughout history, automation has consistently impacted jobs. When heavy machinery came in, people who used to work in small shops were replaced by it, and that impacted jobs at that point in time. But with automation comes the progress of society, right? So we definitely need to start thinking now and study how this whole AI revolution is going to impact our labor force. And we need to take planned steps to educate everybody and make it easy for them to get the proper skill set so they can move on with their everyday work. Microsoft is already playing a leadership role here, not only by advancing AI but by democratizing it, educating people in AI, and providing open-source resources from which they can start learning very easily.

insideAI News: Fascinating. I appreciate your insights here. We're hearing so much these days about the positive forces of AI and deep learning, but there are challenges. Can you say a few words about the main challenges you experience with AI and deep learning?

Anusua Trivedi: Yes, absolutely, there are challenges. The main challenge is recognizing the correct problem scope. We are already seeing benefits of AI. For example, one of our clients is automating the detection of visually similar clothes; another is building smart refrigerators that suggest recipes; another is building smart drones to maintain power lines; and there are many other such projects. But in extending this work, we have spent months developing working models for these solutions. The problem with AI is that it's very, very domain-specific: if you develop for one domain, it will not necessarily work in another domain. The main reason is that it's very data-dependent. We need very domain-specific data, and we need to adapt the trained model to that domain-specific data, which means a lot of technical work like parameter tuning and so on.

Essentially, to sum it up, it's training again from scratch, and training takes a long time; it can take weeks to months, depending on how much data you're crunching and how much computation power you have. We have tried to bring that down by using our Azure cloud, and we have brought the training time down substantially. But it's very difficult to just say, "Hey, I'll run my model for one hour on a completely new data set." Even if I have a trained model, there is no guarantee it will just work out of the box. So creating an out-of-the-box solution using AI is still difficult, and a lot of research is still happening around that. Another problem for AI is multitasking. It's like teaching your kids: they go through the alphabet, then they spell out words, then they can form sentences. AI is still at that stage; it learns in stages and cannot yet do complete tasks. It's not at the point where, once you have trained one AI, you can expect it to carry context over to a related problem and start working on subsets of problems beyond the ones you trained it on. So multitasking is still a problem.
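To make the retraining point concrete, here is a minimal sketch of transfer learning, one common way to adapt a model trained on one domain to a new one without starting entirely from scratch. The backbone, layer sizes, and ten-class target domain are illustrative assumptions, not the approach described above, and as the interview notes, tuning and enough labeled domain data are still required.

import tensorflow as tf

# Pretrained backbone provides generic visual features.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the generic features, retrain only the new head

# New classification head for the target domain (ten classes assumed here).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(new_domain_images, new_domain_labels, epochs=10)
# Even with a pretrained backbone, hyperparameter tuning and enough labeled
# domain data are still needed; the model does not work out of the box.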

As I said, the third big problem is that AI still cannot take into account the context of a problem. For example, if you are building a question answering system and it has learned only about humans, and you suddenly start conversing about jars, like jam jars, it cannot catch up with the context shift from humans to jam jars. It still tries to apply whatever it has learned, like "humans have hands," to the jars. That ends up way out of context, and it pretty much ruins the whole system. Again, lots of research is happening on that front. We are not there yet, but I hope we will be very soon.

insideAI News: That’s great. I appreciate that overview of the challenges you’re seeing. But the good thing, they don’t sound too insurmountable. So I’m looking forward for the next couple of years as AI and deep learning continue to make progress, so. And with that, that’s my last question. So I’d like to thank Anusua Trivedi of Microsoft for giving us all her great insights into what’s driving AI and deep learning. Thank you.

Anusua Trivedi: Thank you so much, Dan. It was a pleasure.

Download the MP3