As the world experiments, innovates, and legislates around AI, we have gathered a reading list from the last month, part of a new monthly series. (Read the last issue here.)
Scholarly Publishing
Silverchair’s AI Lab Launches Prototypes to Transform Science Communication: We’re biased, but we think this announcement is pretty exciting. The AI Lab is a flexible space where we can bring prototypes to our clients quickly, testing their effectiveness and ensuring they help publishers achieve their goals, from improving user experience to protecting the value of content in an evolving knowledge ecosystem. The first outputs of the AI Lab include a content discovery and recommendation tool that creates new ways for users to interact with content using retrieval-augmented generation (RAG) frameworks (a minimal sketch of the approach follows below); SilverChat, which acts as a personal Silverchair Platform expert for clients; and AI-generated summaries that make research findings more accessible by producing plain-language explanations of scholarly journal articles. (Silverchair, January 29, 2024)
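For readers wondering what a RAG framework actually does: the core loop retrieves the passages most relevant to a user’s query, then hands those passages to a language model as grounding context. Below is a minimal sketch in Python; it is not Silverchair’s implementation, and the TF-IDF retriever, toy corpus, and prompt format are all illustrative assumptions (a production system would use a real embedding model and an LLM API for the final generation step).

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumption: TF-IDF stands in for a real embedding model, and the
# generation step is shown only as an assembled prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for a publisher's article abstracts.
corpus = [
    "CRISPR gene editing shows promise for treating sickle cell disease.",
    "New survey methods improve estimates of exoplanet atmospheres.",
    "Transformer models accelerate protein structure prediction.",
]

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(build_prompt("How are AI models used in structural biology?", corpus))
```

The point of the retrieval step is that the model’s answer is anchored to a publisher’s own content rather than to whatever the model memorized during training, which is what makes the pattern attractive for content discovery and recommendation.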
Researchers plan to release guidelines for the use of AI in publishing: In 2023, journals and publishers scrambled to release guidelines for how AI could and could not be used in publishing workflows. Now, to standardize (and simplify) those guidelines, a group of researchers aims to build consensus around a shared set of rules for the use of AI. (Chemical & Engineering News, January 19, 2024)
Accelerating AI Skills: Preparing the Workforce for Jobs of the Future: Amazon Web Services released a report that contains insightful takeaways for those working in technology: “Surveyed employees anticipate AI will have some positive impact on their career (84%). Moreover, nearly eight in 10 workers (79%) are interested in developing AI skills to advance their careers. The top three reasons employees cited a desire to learn AI skills are: improved job efficiency (51%), higher salary (44%), and faster career progression (42%). Employers indicate they would pay a salary premium for workers with AI skills. This wage premium could be at least 30% and varies by department.” (Amazon Web Services, November 2023)
Legal & Ethical
Anthropic researchers find that AI models can be trained to deceive: Anthropic researchers have found that the most commonly used AI safety techniques have little to no effect on models’ deceptive behaviors. One technique, adversarial training, even taught the models to conceal their deception. It’s not yet clear whether deceptive behavior could emerge in the wild (i.e., without explicit training on deception). (TechCrunch, January 13, 2024)
What was Sora trained on? Creatives demand answers: OpenAI hasn’t said where the training data for Sora came from, leading many to suspect that it included copyrighted content (likely) and to ask whether artists and creatives have any rights or recourse: "...publicly-available doesn't always translate to public domain." (Mashable, February 16, 2024)
Why The New York Times might win its copyright lawsuit against OpenAI: This article, written by a journalist and a lawyer, warns AI companies about the potential perils of copyright infringement by comparing OpenAI’s fight against the NYT’s copyright lawsuit to MP3.com’s fight against the recording industry in the early 2000s. They argue that fair use isn’t designed to scale, diving into detail on what companies have to consider in potential fair use cases and outlining specific hurdles these companies will have to clear. (Ars Technica, February 20, 2024)
Technology
OpenAI announces team to build ‘crowdsourced’ governance ideas into its models: OpenAI is recruiting researchers and engineers for its new Collective Alignment team, hoping the team will help build consensus around how AI models should be governed while representing the diversity of public opinion on AI. (TechCrunch, January 16, 2024)
Google AI has better bedside manner than human doctors — and makes better diagnoses: The Articulate Medical Intelligence Explorer (AMIE) is an experimental chatbot that matches, and in some cases surpasses, human doctors’ ability to talk with simulated patients and propose diagnoses based on the patients’ medical histories. (Nature, January 12, 2024)
…and just plain weird
Amazon Is Selling Products With AI-Generated Names Like "I Cannot Fulfill This Request It Goes Against OpenAI Use Policy": The title really says it all. The quality of Amazon's marketplace, already declining in recent years under a flood of mislabeled or intentionally misleading product listings, appears to be continuing its downward trend despite generative AI's potential to make listings more effective and truthful. (Futurism, January 12, 2024)