Digital Society: Panel Discussion

Okay, so this was a fascinating set of talks, I have to say. When we first started organizing this workshop, at one point somebody said, but this is so close to the Valley, nobody is going to show up. But really, it's fascinating that it's packed and the agenda is packed. So I'm really thrilled that everybody showed up, and at all the great talks, really fascinating. I guess my main question is to really focus on the research challenges that you see in each of your areas, and in particular on the AI for social good angle within those. I know you touched on other problems, but could we go around and just talk about what you see as specific problems in AI for social good that you think would be interesting for people to think about? This is going to be some of what we will discuss later on as well.
>>I would say good models of human-computer interaction are certainly required. Crowdsourcing most times comes to our rescue, and we heard a comprehensive presentation here, especially on how to evaluate labels from the crowd, and also on how to more seamlessly integrate the crowd into the model itself. I mean, really treat the crowd also as learners, so nobody gets this annotation or creation [INAUDIBLE]. So give a chance for integrating. So, yeah, I would say human-computer interaction, more robust models of crowdsourcing, two-way active learning. These are very important research problems in my mind.
>>I think I just spoke, I gave
my two cents on this, so I'll just add that I think there is a lot of data in plain sight. There is a lot of data floating out there that we can't really use, and so we need techniques to get insight into it. We, for example, have not even gotten into text processing yet; we're just beginning to get into it. There is so much more information you could capture, for example, about political actors, if you could mine all of the news articles about them. We haven't even started that yet, but hopefully that will be an interesting area going forward.
>>Let me take a slightly
broader view, so, not just for the digital society or for
any of the previous sessions. In general, I think AI
gets started when somebody has created an important and
useful dataset. Before that, the AI researcher cannot even get involved, so I see AI technology as an enabler, as part of the process, for any social good application. Be it agriculture, be it
healthcare, be it education, be it legal, be it political. Somebody has to put together
the important asset, usually that’s
the domain expert. Usually it is not the AI person; in some cases it can be, but that is an anomaly. Usually it is the domain expert who, for whatever reason, has put together this dataset. For example, in our case of CAT scans, a hospital approached us. They said: we are the first rural
hospital in the country and we have been documenting
all the CAT scans and MRIs that have been taken
in our facility, including the radiologists' reports; can you do something with it? That is the first step. I, as an AI person, cannot easily go to each hospital and say, okay, let's start maintaining this dataset. So that's the first thing. The second thing that we need is
for the domain expert to tell us what are the important problems
that they are interested in. What do they want this data
set to reveal to them? They may not know how to
solve those problems; it may not be easy for them to phrase those problems in a language that we understand. They may not even know the scope. But in communication, in HCI, and in business also, we often say: know your user, talk to the user, have user interviews, so that you understand what it is they need. In this kind of interdisciplinary research, I believe it is the job of the AI
person, the AI researcher, to talk to this human expert and
let them fantasize about what kind of world they can envision
in the future in their domain. Because through that, we will be able to identify nuggets of tasks which either become research problems, or to which existing solutions can be adapted easily. And I think the third most
important thing is the life cycle. The AI system will use the data; it might be able to come up with insights based on passive data analysis, or it might be an active system which can interact with the domain user and give regular insights. But then what? Then it needs to go into that
setup, into that domain. So if someday we are able to figure out how to read our scans automatically using deep learning, then what do we do with it? Well, we have to go to the radiologists. We have to talk to them,
we have to say, okay, these are the insights in our system [INAUDIBLE], what do you need? What is that user interface? And of course, they don't want the deep learning system to be doing their work, and the AI system will be completely wrong many times. So what part of the AI system can we deploy so that the radiologists can actually benefit from it and save their time? So I really
think of myself as a very small part of the big picture, and in this big picture there
is an important domain. Somebody has the foresight or
the insight to collect the data, somebody approaches us or
we approach them and we get access to the data, then
some of the magic gets done with the AI algorithms and
new algorithms come in. And then we go back to them and say, look, this is what we have found; let us start a conversation and take a look at this together. So my experience so far, in the four years in
India is that there are so many people really excited
about using AI technology. Very few of them are collecting
data at a high quality and even far fewer of them
are willing to share this data. And the initial conversations often stall in the end, so I feel that the challenges are more logistical. I can't tell you which research
problem the AI needs to solve because many of the times
the research problems are indicated by the data
sets themselves. When you have
a certain dataset, you try to apply an existing algorithm, and you find that the algorithm does not work. It does not work because it does not capture the dataset's special characteristics. Now, you start modeling those special characteristics. So, most of the time, the AI
challenges begin with modeling and then lead to algorithms. So ultimately, it is hard for me, at least from my vantage point as an applied researcher, to say what the research challenges are. But having this communication
and encouraging and facilitating this communication, and having
teams of domain experts and AI scientists come together,
I think that is what is going to really make a significant
difference.
>>So, Mausam, I made a mistake in sitting to your right, because you stole everything I was gonna say. Almost, not quite. But I was just gonna add to what
you were saying which is that we have a handful of
engineers in the audience. I think if anything the best way
to answer this question is to ask them and I am hoping that
if we do this workshop again, we can get even more
practitioners on the ground, people who are not
necessarily AI experts, cuz I think that’s really
the place to start and yeah that’s what I was gonna
add to what you were saying.
>>I will start with the same complaint as him; I am also unhappy that Mausam sat before me. [LAUGH] Yeah, but I actually worry a lot about how to detect AI failing, right? I mean, AI is not a modest being, in the sense that it does not say what it does not know; it does not even know what it does not know. So, figuring out the unknowns: when did the AI system fail? If you are doing MRI scan detection, when will it fail to detect? And if you trust it too much, then what? So, for example, if I could not connect people who have the same complaint, should you be feeling isolated? Am I the only one who has this complaint, does no one else have it, or did the AI system fail? And the other thing, in
terms of AI in the field when it is deployed I worry a lot
about positive feedback, right? I mean, if I am using AI to collect labeled data, or to decide what I do, then in the long term I will be adding positive feedback. It's like sometimes if you
are being taught by a teacher who only gives you
easy problems. You might have a false
sense of being a master of your field whereas
somebody who is confused and kind of messed up actually shows
you a more complete picture. Whereas AI will be solving in
some way the easy problems, right and then you might
think that you have reached a particular target but
actually you have not. So I am kind of worried about the edge that is outside the reach of AI. And also, of course, all of this social good work requires a very close-knit pipeline between the people who will be using it and the developers who will actually be creating and maintaining the systems. In the long term, it's not, for
me like something that I do for six months and then I feel good
that I've done something useful. Unless it lives for at least five years, what's the point of doing it, right? So how will that happen, when we are trying to build something from academia?
>>So I would actually like to echo what Mausam said, with a personal example. At one point, a couple of years back, we developed an algorithm for classifying social media posts. There are two kinds: situational posts, which give situational information, and others, like sentiment and so on. So we designed a classifier to distinguish between the two, and then summarized the situational information. And we compared with several
baselines using standard summarization evaluation
metrics. We could beat several baselines, and we got the paper published at a conference. We were really happy. But then we showed the summaries to actual NGOs and people who work at disaster sites, and they told us this won't be of any use, even though we were beating all of the systems out there. They said: you are summarizing situational information; what we want is
actionable information. So then we asked, what is the difference? They explained, and we understood: situational information might be "8,000 people have died in the Nepal earthquake." That is situational information, but you really can't do anything with it except feel sad. Actionable information is "drinking water needed in this village." The problem, we later understood, is that in any kind of summarization process, you assume that whatever occurs most frequently in the data is the most important thing; but here it was completely the opposite. The most important thing was something very infrequent. So these kinds of things, the
domain experts need to tell us, the practitioners who are working on-site. So I agree that we need a lot of interaction with the practitioners. And another point is that
often we are not able to judge whether what our algorithms find is good, in areas like the medical domain or the legal domain. Even if the algorithm gives us an output, we cannot judge whether it is good. So again, we need to go back to
the practitioners to tell us what is good. And the third thing I would say, and I really don't know if a solution exists: there are some projects where maybe the challenge is not computer-science oriented, or maybe developing a deep learning method is not the main challenge. Here I find difficulties in
motivating computer science or technical students
to do that work. Because ultimately it
is a usable system which has to be developed. But students at technical institutes, computer science students, are not always that willing to develop the system. They are more interested in deep learning. So that is a problem. Well, maybe some kind of incentive mechanisms, maybe, I don't know, internships or these kinds of things. Some conferences are coming up nowadays, but maybe some more incentives are required to get good people to actually do that groundwork.
>>Yeah, Mausam, I can see
you did a lot of thinking. [LAUGH] So I always used to wonder: why can't AI and machine learning have some of the properties that database systems research has? Again, for example, declarative semantics: can we have declarative semantics associated with ML models, which declare up front, this is what I am capable of doing, and this is what I am not capable of doing? So machines are excellent; AI
might be excellent at recall or I would say it’s much
better than humans at recall in many situations. But what about precision? So the capacity, the robustness
for a model to be able to come back and say I refrain
from giving an answer. Or the model to actually
approach the domain expert as a consultant and
engage in a dialogue. I don’t necessarily mean
dialogue systems here, but the learning process
itself being a dialogue. And I’ll go back to
what I said earlier, in case I didn’t
make myself clear. I think there’s a chance for
the human also to learn from the machine as it
provides insights into the data. And we have seen this practically in labeling, labeling any repository of B2B reports or master's reports. I mean, every time I have an email on a particular topic, I try to classify it; I leave it with a tag, and later on I realize I would actually have tagged it as something else. So even a person who is
knowledgeable in the domain might benefit from
a second chance. So can humans be given a second chance, courtesy of the machine? And can machines refrain from pretending to know more than they know?
>>So, a couple of times HCI was mentioned in this session. And Mausam was mentioning how much design goes into designing the interventions; Karim also mentioned it. So one
question I have, it’s a broader question that I would like to
draw from your experiences. In the end goal, or the improvement that you get, how much do you think is due
to the design process and interfaces that people use
versus the optimization that you do on top of those systems? Or is it just a mixed
intervention where it’s hard to tell what’s
the main driver?
>>My reaction to the question is: I always feel that, at least in crowdsourcing platforms, or in general in any platform where you are building user-facing applications, the interface is more important than anything else. Of course, functionality matters: if you have a good interface but you can't do anything with it, then that's obviously useless. But if you have to make a choice between a better interface and
that’s why most of the work that we have done in crowdsourcing
Most of the interesting work. I mean there’s always
some binary choice, multiple choice kind of work. There was a question
about that too. There we don’t need that
much of user interface but even there clear instructions
have a huge role first. Sometimes you start with
confusing instructions and that completely
confuses the workers. But in more complex tasks, it is always the experienced researcher who first designs the workflow; after their design shows that it can be done, we take it and make it work better, or cheaper, or higher quality. But the "it can be done" aspect in crowdsourcing has, for me, usually come from HCI.
>>I would like to add that it's
not like it's only HCI; there can be AI in designing human interfaces, not in the form of superficial interfaces. I would just say that if you are trying to get data labeled, I feel it helps to put one particular instance in the context of other examples. It also goes back to that point that Ganesh made about re-labeling, which you might want to revisit. When you are just shown one instance at a time, you cannot provide labels of as high quality as when you are shown it in the context of other examples, ones which are close by, or which you think might be confusing. I've done a bit of work on active learning, and I feel it's very important
to show examples which are on the boundary, which you think might be straddling classes, and to show, sort of: here is the set of examples, now label all of them jointly. This is an AI problem, but it's also an HCI problem. So that's where I think research is required when we look at the problem.
>>Just one quick thought. I think it's useful to ask who
is going to make use of the AI. If you look at companies like Google, Facebook, and Microsoft, the impacts today have been massive; AI is just totally taking over their organizations. Part of that is because they can hire extremely highly skilled people; they have lots of money to bring in human capacity. But the context of this workshop is social good. And so we're talking about
a different set of organizations that often don’t
have as much money. So NGOs often don’t have
as much money for example. So in a situation like
that where you have less human capacity,
I would argue that the design, the human interface matters more
cause you need to overcome that limitation of less capacity. That's all, I think.
>>I would slightly differ with you, because social good doesn't mean it has to be non-profit all the time.
>>Okay.
>>Having said that, I think, in my opinion, first of all: great sessions. I actually come from a very different background; I come from Microsoft's worldwide commercial business, so a different perspective. But if I look at it, AI should be complementing where human intelligence may actually be limited. It should not be a substitute where human common sense is actually failing. So, for example, a complaint management system probably has to do with a failure of human common sense rather than anything to do with AI.
>>If there was a failure
in human common sense.>>That’s what I
was trying to say. Having said that in India
we have around 3.4 million registered NGOs. If they can’t fix a problem, I don’t think the AI can
ever fix the problem.
>>Exactly.
>>That's a fair point to make. Having said that, I think we should look at AI as probably more complementing where human intelligence is actually limited. Again, it should not be
only just for social good. It doesn’t mean that it
has to be not-for-profit. You can always have something
which is for social good yet it is a profitable
venture as well.
>>Go ahead.
>>I have a quick response. I mean, "human" is not one individual. When we say human intelligence is failing, what do we mean? Are we saying intelligence is failing, that an average person's chess-playing ability is failing? Or, in this particular case: do people know how to fill forms? Yes. Are they inclined to fill forms? No. Is everybody equally good at filling forms? No. So therefore, when we build systems, we have to deal with all kinds of humans, also the people who don't know how to fill forms, and their problems are just as important. So I personally
disagree with you here. I believe that yes,
machine intelligence must be complementing human intelligence as much as possible. But, at the same time, there are situations where some other human could have done a better job, or where the expert human is knowledgeable. And I think those are very valid scenarios for AI systems to come in. For example, if you can create automated teachers which will teach in our villages, I will be very happy. I mean, it's a philosophical debate.
>>[INAUDIBLE]
>>[LAUGH]
>>I mean, it's a very interesting [CROSSTALK]
>>[LAUGH]
>>[LAUGH]
>>This is AI and social good, and the social good part is not only taken care of by NGOs; I think we need to think of government, obviously, as an important part of that. When it comes to government, you tend to see equity as part of something you need to adhere to: all citizens need to be taken care of. And for that, I think we need to have conversations in a forum like this, but potentially what we are missing are the naysayers and doomsayers about where all of this is going to go, the people who don't like [INAUDIBLE] for x, y, and z reasons, because it is at the intersection of these conflicting opinions that we will have solutions to problems in a way that addresses the current situation. So the reason I bring that
up is because, listening to the conversations, if I was in government and you came with that solution for disasters, I would say no, I don't want that, because I'm worried about bias. Who are the people who have Twitter, who have social media? I don't know the answers, and I could be accused of only serving the rich people. But how do we ask those kinds
of questions in the research? For all of the amazing work that all of you are doing, I think in this group we probably need to have a few more people who don't want this kind of work happening, also in this room, to make us refine our ideas a little bit better.
>>The topic of NGOs and
often as potential partners. When I started this,
my work in the social enterprise sector we’re trying
to help with neonatal mortality. And this is 2010 when I looked
at the data we had a lot of home deliveries,
institutional deliveries had oficially dove cross
the 50% mark it was at 60%. If you look at, when we look at just a baby
is born in institutions. In the country, it was 2%
who were born in NGO managed hospitals, about 60%
were in government, 40% were in private facilities. The reason I quote that
statistic is when we talk about NGOs and governments,
we are missing large segments of the population that participate
in market economies but are struggling. Okay, but are not necessarily
destitute and therefore not necessarily addressed as
truly bottom of the pyramid. And these may be low-hanging fruit, who may not be accessible through government and NGOs, but through private-sector organizations. It may not be the organized private sector, but there are large pools of possibility there.
>>Any other questions? Please.
>>So I have a clarification: so
when you say not the bottom of the pyramid, but yet struggling
enterprises, do you mean, for example, the informal sector? The [INAUDIBLE] dispensaries.
>>Yes, dispensaries [INAUDIBLE]. Many of them, due to government incentives, will go deliver in a government setting. But when the baby gets sick, they will not go to a government
facility, for example. When they need surgery, they will not go to
a government facility. Your auto drivers, minicab drivers, right, the urban poor are a large segment, by the way, who often do [INAUDIBLE] and have very different characteristics from the rural poor.
>>Yeah, this question is for
Mr. Hangel. I just wanted to understand the work you did [INAUDIBLE] in terms of bringing transparency to the civil works activities of the local administration. How did the local administration actually take it, and how did they include it in their work for actually bringing in the smart cities [INAUDIBLE] in that particular vision? Because what I understand is that some of the smart city locations, before they came up with the RFP, did a lot of extensive workshops with stakeholders and took feedback from them, in terms of bringing out a very unique smart city proposal for that particular vision. So in your [INAUDIBLE], what was the outcome, and how was it actually taken forward by the local administration?
>>So I think there are a lot of
things that are wrong about the smart city mission, but one of the things that actually
has worked is the citizen participation to
a relative degree. Partly because there was
a point system and so the city, if nothing else, had to show
that they had consulted one lag people or two lag people or
whatever it is. So there was a genuine
commitment to doing whatever it takes to reach a lot of people,
that’s number one. The second thing is that
what is a smart city is very much left to
the particular city, at least in
the Indian Smart Cities program. So cities make whatever
they want of it, and I think that is the way
the government wants it also. So it’s really up to us or up to the citizens to make
what they want of it. So in our case we have said that
open data should be a big part of smart city, I don’t think many other
cities have done that. But once we go to them with that
platform, I mean, it’s hard for them to say no for one thing, because it’s kind
of a common sensical thing. The other thing is to project
the tools that we are building as tools that will help them,
not the public necessarily, even though that is also
part of the agenda. So for example, the system I
showed where we can crowdsource, what’s really happening on
the ground in terms of works, is a big boon to the commissioner, because he doesn't have to travel now to all his 70 wards or whatever it is. He can now sit in his office and
works from his office. So packaging it in a way that
benefits both the government as well as the public and not
necessarily going head to head. I’m sure if this system gets
really popular [INAUDIBLE] backlash [INAUDIBLE] but so far,
we haven’t got to that point. So packaging it in a way that
really brings out benefit for the administration as well. I think it’s really important, packaging it in a way that
that’s saying you don’t have to pay [INAUDIBLE] to somebody to
develop [INAUDIBLE] software [INAUDIBLE] open source
platform for you for free. That helps a lot, so
things like that.
>>Thanks, thanks a lot.
>>So, at this point: this was a really thought-provoking set of talks, and really provocative questions. There are lots of questions I have personally, but we have to end at this time so that we can move on. Thank you very much for the excellent talks. Thank you.
