CHIFOO Event Recap: Ethical Dilemmas and Democratized Proliferation of AI (Retroactively "Adventures of an Introvert in the Wild Event Recap No. 1")
tl;dr: Artificial Intelligence (AI) development is advancing rapidly, and society must adapt its educational and social welfare systems to ensure the benefits are distributed as widely as possible. We have to intentionally create systems that decentralize the wealth and benefits that technological advancement has created and will continue to create.
This was the first CHIFOO event I have been to, and I quite enjoyed it. The talk, entitled “Ethical Dilemmas and the Democratization of AI,” was presented by Tapabrata “Tapa” Gosh (@semiDL on Twitter), a 17-year-old Thiel Fellow (aka one smart dude). I thought he did a good job keeping the presentation a reasonable length — 40 minutes or so — which is about as long as I can realistically stay engaged during a talk, and it left enough time for questions at the end.
Though the title of the talk implied a heavy focus on the ethical dilemmas of AI, for better or worse, Tapa mostly focused on recent AI advancements and where he sees AI going in the future. The dilemmas he did focus on were the economic implications of AI, like the automation of white-collar jobs — he used radiology and dermatology as examples — and the possibility of greater rates of unemployment because of this.
Interestingly, he does not see the potential increase in unemployment as an inherently bad thing. He concedes that if we do not plan for it as a society and modify our social welfare systems accordingly, it could be detrimental, but he doesn’t think it has to be. He echoed many tech luminaries by citing the idea of a “universal basic income” (UBI) as a potential part of the solution; however, he did not delve into its ethical implications.
Some questions another audience member and I had were:
Would people receiving the UBI be relegated to specific housing while those who are able to work get to live in better conditions?
In a world where data is increasingly valuable, would they be required to provide their health information for epidemiological studies in order to receive the money?
While questions like these are what I’m particularly drawn to, and why I think human-centered design is so important, I don’t fault him for not going deeper. The ethical implications of AI could fill (and probably already do fill) an entire college-level course; they’re not something you can cover in a 60-minute talk.
I left the talk feeling optimistic but apprehensive about the future of AI. Pretty much how I always feel when thinking about AI at any length.
While I think it’s awesome that machine learning has already produced computer vision applications that can evaluate skin rashes and radiology scans as effectively as doctors, the increased centralization of wealth that could result from “automating” previously prestigious jobs, such as those in medicine, makes me nervous. On the upside, though, applications of AI like this could free up doctors to spend more time with patients. How it shakes out will largely depend on how we modify societal systems to account for these changes.
While society needs software developers and AI researchers to keep advancing the field, these social issues are not in their wheelhouse. Arguably more important for the times to come are the politicians and community leaders who will have to manage the wide-scale social changes (and possible fallout).
What do you do as a society when previously well-established systems of advancement, such as college, destabilize? The return on investment of a college degree is already a pittance compared to what it used to be, and personally, I don’t expect it to get better*. How, then, does society iterate to provide other means of advancement? Not doing so would lead to even greater centralization of wealth than the United States is currently experiencing.
What do you do in the case of a Universal Basic Income to maintain the psychological wellbeing of people who no longer “have” to work, but live in a society that still maintains a “pull yourself up by your bootstraps” mentality**?
I think humans are resilient and clever, and for this alone I maintain optimism that AI won’t cause our demise. However, I anticipate many bumps in the road. As far as I’m concerned, we are already struggling and will continue to struggle before we figure out sustainable solutions to technological advancement. That’s why I think organizations like CHIFOO are important, and why I believe an increased focus on human-centered design is integral to the symbiotic development of technology and society.
**If I got anything wrong in what I recounted about the talk, please let me know and I will make the appropriate modifications. Though this is a casual blog post, I always want to maintain journalistic integrity.**
* My guess is that the system as it currently stands will “break”: four-year liberal arts degrees will go back to being an aristocratic luxury, while online certificates, online degrees, and bootcamps will become more prevalent and valued.
** In the case of widespread unemployment, it’s likely this American mindset would eventually change. However, cultural beliefs don’t change as fast as technology, which is why I think many people would be left to cope with the dissonance between “government-approved” unemployment and an antiquated cultural mindset that doesn’t approve of it.