• Teaching artificial intelligence and humanity, February 2018, by Jennifer Keating and Illah Nourbakhsh
    https://cacm.acm.org/magazines/2018/2/224630-teaching-artificial-intelligence-and-humanity/fulltext

    Two lines of thought intrigued me while reading this article:
    - teaching the humanities within the general computer science curriculum, and more specifically within AI education.
    - the clumsiness (obliviousness?) of allowing oneself to compare the abolition of slavery with the liberation of AI.

    The first:

    In a time of accelerating technological disruption, the next generation of leaders and innovators are ill-equipped to navigate this boundary chapter in human-machine relationships. Perhaps our students can learn from how humans have treated humans to determine viable roadmaps for this challenging moment in our economic, social, and political history, as we mindfully navigate human-machine interactions.

    Our society is locked in a stance of both anxiety and ambition in regard to the future of AI. We believe it is crucial that students embarking on undergraduate studies, as budding technologists, writers, policymakers, and a myriad of other future leadership roles, should be better equipped and better practiced in engaging these difficult questions.

    Notably, corporations were historically granted limited personhood to shield individual humans from responsibility and blame; personhood ascribed to AI similarly shields both corporations and engineers. Personification trades accountability with convenience for tort and liability, and further with product marketing.

    Rapid progress in AI/machine learning and its central role in our social, economic, and political culture signals its salience to the next generation of students entering universities. Building next-generation AI is currently a hot topic. At Carnegie Mellon, we have no trouble filling such classes. And yet, a nuanced understanding of the contributions that technologists are currently making to the world, an indication of how the next generation of computer scientists, engineers, and roboticists might shape the world that humanists and social scientists study, is not at the forefront of our undergraduates’ minds. So, how might we ensure this is something they consider throughout their undergraduate career? And that, instead, societal consideration shapes their undergraduate studies from their first year onward?
    [...]
    [The students] will be taught each class by a team of faculty with an intertwined pedagogical approach: a roboticist and a humanist.

    The second point makes me question whether this article is worth sharing at all. This excerpt, for example, gives me the impression that the authors entertain the hypothesis that slaves were not part of humanity, and that they needed to know how to read (and write) in order to gain access to it.

    Reading, however, marks agency for Douglass. Reading, in the context of an AI system, suggests anthropomorphic undertones and perhaps humanity.

    The authors try to distance themselves from such a view:

    In the context of Douglass’ narrative, the prospect of literacy suggests the slave as worth more than his or her labor. Instead, the slave’s capacity to learn, to engage in civilized discourse by joining what Benedict Anderson calls “an imagined community,” suggests his equal position with other members of the society in contrast to the juridical category of slave as property.

    [Douglass, a former slave, is the author of Narrative of the Life of Frederick Douglass, an American Slave (1845); his essays contributed greatly to the abolition of slavery.]

    And yet... a Buddhist maxim (for whatever philosophical weight it carries) still haunts me, and it nags at me as I read this article:

    Comparisons are odious.

    [edit: added "the authors entertain the hypothesis"]