TL;DR Summary: A recent paper discusses the "Ethical and social risks of harm from Language Models".
I thought this paper might interest those curious about the various hazards posed by large language models such as GPT-3.
From the abstract:
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary literature from computer science, linguistics, and social sciences. The paper outlines six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms.
It is quite long, but a particularly attractive feature is that it includes advice on how to read it: there are suggested paths for readers who can spare only a minute or ten minutes, as well as for experts and non-experts who want to go deeper.
https://arxiv.org/pdf/2112.04359.pdf