As the world experiments, innovates, and legislates around AI, we have gathered a reading list from the last month, part of a new monthly series.
Scholarly Publishing
- Reporting Use of AI in Research and Scholarly Publication—JAMA Network Guidance: JAMA Network editors presented detailed recommendations for authors and researchers on the transparent, appropriate, and accountable use of AI, including when used in manuscript preparation. (JAMA, March 7, 2024)
- Technical Recommendations to Protect Publishers’ Content in the Context of AI: STM’s Standards & Technology Committee (STeC) sent a list of technical recommendations to STM member houses, offering information and recommendations on protecting their content from potential AI use and misuse. STM members can find this document in the members’ area of the STM website. (STM, March 19, 2024)
- AI Lab Reports Webinar Series: Silverchair launched our annual free webinar series, which for 2024 focuses on practical tips, insights, and applications of AI. The series kicks off on April 18th with AI A to Z: A Primer on AI Applications in Scholarly Publishing, followed by Cultivating Creativity: Tips for Fostering a Culture of Innovation & Adaptability on May 14th, and wrapping up with Answers to the AI Questions You Don't Want to Ask on June 27th.
General
- New Users Need Support with Generative-AI Tools: New generative-AI tools are released every day, but the rush to market has left sparse onboarding and contextual guidance for users less familiar with these tools. This support will be critical as usage expands beyond developers and early adopters. (NN Group, March 29, 2024)
- Two years later, deep learning still faces the same fundamental challenges: Two years after publishing his most notorious article, “Deep Learning Is Hitting a Wall,” Gary Marcus reflects on whether his predictions held up (spoiler: many of them did). (Marcus on AI, March 10, 2024)
- Among the AI Doomsayers: This culture piece explores the softer, more human, more speculative side of AI-doomer culture, the alignment problem, and the very, very small world of the corporate AI tech scene. “If you truly believe that A.I. has a coin-toss probability of killing you and everyone you love, Nielsen asked, then how can you continue to build it?” (The New Yorker, March 11, 2024)
- New Microsoft tool can compress AI prompts by up to 80 percent, saving time and money: LLMLingua-2 from Microsoft compresses prompts to as little as 20 percent of their original length, removing unnecessary words or tokens and thereby reducing costs. This could be a key advancement as businesses look to scale their generative AI offerings cost-effectively. (The Decoder, March 24, 2024)
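To see why compression matters at scale, here is a minimal back-of-the-envelope sketch of the cost arithmetic. This is illustrative only, not the LLMLingua-2 API; the per-token price and token counts are hypothetical assumptions.

```python
# Illustrative prompt-cost arithmetic; not the LLMLingua-2 API.
# The price and token counts below are hypothetical assumptions.

def prompt_cost(tokens: int, price_per_1k: float) -> float:
    """Input cost in dollars for a prompt of `tokens` tokens."""
    return tokens / 1000 * price_per_1k

original_tokens = 10_000
compressed_tokens = int(original_tokens * 0.20)  # compressed to 20% of original length
price = 0.01  # hypothetical dollars per 1K input tokens

before = prompt_cost(original_tokens, price)
after = prompt_cost(compressed_tokens, price)
print(f"before: ${before:.2f}, after: ${after:.2f}, saved: {1 - after / before:.0%}")
# prints "before: $0.10, after: $0.02, saved: 80%"
```

Multiplied across millions of API calls, that 80 percent reduction in input tokens is where the claimed savings come from.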
Legal & Ethics
- The Miseducation of Google’s AI: This podcast explores questions spurred by Google's Gemini rollout around diversity, intentional and unintentional ahistoricity, truth, and the job these AI systems are meant to do. (New York Times, March 7, 2024)
- A new bill wants to reveal what’s really inside AI training data: The Generative AI Copyright Disclosure bill from Rep. Adam Schiff (D-CA) would require AI developers to disclose any copyrighted materials used to train their models. The bill would apply only to models released after its passage, leaving the wide swath of already-trained models to be litigated in the courts. (The Verge, April 10, 2024)
- Microsoft Targets Nuclear to Power AI Operations: As companies struggle to meet the power demands of AI computing, Microsoft is attempting to expedite the nuclear regulatory process to gain faster access to affordable power sources. (Wall Street Journal, December 2023)
Resources
- Anthropic Prompt Library: Anthropic has assembled a free and extensive library of optimized prompts for both business and personal use cases.
- Become an AI expert with these free online courses: This article lists free online courses on AI, including offerings from Coursera and Codecademy.
- Generative AI in a Nutshell - how to survive and thrive in the age of AI: A highly informative 18-minute illustrated video covering what generative AI is, how it works, how to use it, its risks and limitations, autonomous agents, the role of humans, prompt-engineering tips, AI-powered product development, the origin of ChatGPT, different types of models, and tips on mindset around AI.
- The 2024 MAD (Machine Learning, AI & Data) Landscape: This interactive (and overwhelming) visualization of the AI company landscape has grown to more than 2,000 entries. The sheer volume of the map helps illustrate the paradigm shift of the last couple of years. (FirstMark, March 31, 2024)
Preprints
- Long-form factuality in large language models (Google DeepMind, Stanford): The authors’ SAFE method uses an LLM to automatically fact-check long-form answers against Google Search results, and it turns out that LLMs can be more accurate (and much cheaper) at this fact-checking task than human annotators. (arXiv, April 3, 2024)
- Can large language models explore in-context? (Microsoft, Carnegie Mellon): This research found that while large language models have some potential for decision-making tasks, they generally don't explore options effectively on their own and need external help (like provided summaries) to perform well in complex situations. (arXiv, March 22, 2024)
- MATHVERSE: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? (CUHK MMLab, Shanghai AI Lab, UCLA): New benchmark designed to test whether language models truly understand visual diagrams when solving math problems, revealing that most current models struggle with this and surprisingly perform better without the diagrams. (arXiv, March 22, 2024)
Just for fun
Are you frustrated by how social platforms seem to be overrun by bots? Or do you think what we need is…MORE BOTS? Check out Onlybots, the social network composed exclusively of bots. Categories include: RobotHaikus, StillCantDoHands, BinaryJokes, and Cats.