What do you think about the AI landscape in scholarly publishing? How far along are we, and how transformative (or not) do you expect this technology to be?
- Jeremy Little, Tech Lead: It is very difficult to say how scholarly publishing will change. AI has been used in technology solutions since long before ChatGPT and the modern wave of generative AI. That said, generative AI does unlock a lot of new ways of interacting with typical research content. The only way to know whether it will actually be useful to researchers is to try things out and see how it does.
- Emily Hazzard, Product Manager: We're just at the beginning! We've seen some cool uses for AI so far, but I think we'll start to see the most transformative uses of AI and LLM technology just after the hype cycle starts to die down. The excitement about AI is great, and I'm here for it, but the hype, the constant battles between the techno-optimists and naysayers, and the corporate drama (see: Sam Altman's firing and swift re-hiring; the needlessly "edgy" nightmare that is Grok) can be somewhat distracting when thinking about its utility and potential. I recommend Ben Evans' presentation on AI from the end of 2023 to help think about AI as the new "platform" for information and what that might mean for the future.
- Eric Goad, Senior Software Developer: *Eventually*, my guess is that AI will do a large chunk of the work in creating a paper, guided by the researchers.
- Lily Garcia Walton, Chief People Officer & General Counsel: From what I have read, AI has distinct potential to revolutionize scientific research by streamlining the discovery of relevant data sets and further democratizing the scholarly commons. How transformative these tools will be depends upon the willingness of scholarly publishers to make their content accessible in this way. Clearly, important questions remain to be answered regarding, for example, how to ensure transparency and reliability, protect intellectual property, and guard against the perpetuation of bias.
- David Hazzard, Software Developer: One of the more exciting things I've stumbled on is how accessible AI has made the often complex topics of scholarly articles. For the everyday person, the prospective participant in a medical trial, or the reporter communicating a finding to the public, this is going to be a game changer.
There are a lot of worries and fears relating to AI (ethical and privacy concerns, or concerns over misinformation and critical thinking). What do you say to these worries?
- Eric Goad, Senior Software Developer: These are valid concerns. As with any product you put information into, it's good to check the privacy policy. For OpenAI, it depends on whether the use is via the API, through ChatGPT with history on, under an enterprise account, etc. AI is a tool that can assist with many things, including helping bad actors spread misinformation.
- Emily Hazzard, Product Manager: I think they're legit (for example, we all remember when a few simple tests of Google's early version of SGE had it telling people to eat poisonous mushrooms and offering disturbing takes on slavery). But I also think these challenges are figureoutable, specifically with the help of AI. Imagine using an AI tool to securely screen a dataset for data that might pose risks if exposed, allowing the owner of that data to clean it before making it publicly available.
- David Hazzard, Software Developer: I'd say that they are warranted. It's so easy to rely on a product this transformative. That said, we did this same thing when paper and pencil became widely available and we phased out writing on a slate. We did it again when the desktop computer became a household item and when a calculator was able to fit in one's pocket and when the sum of human knowledge and history became available at hyperspeed at our fingertips. I think we'll be fine.
- Lily Garcia Walton, Chief People Officer & General Counsel: I say all new and transformative technology introduces the potential for misuse. Rather than fearing what AI may do, we should engage people of good conscience in defining a socially responsible framework for its application.
- Jeremy Little, Tech Lead: I think these worries are very understandable, and I share many of the listed concerns. However, the best course of action here is not to run away from the problem but to engage with it and do our best to understand how and why generative AI works. The better we all understand this technology, the more positive its impact on us will be.
If AI could dream, what would it dream about?
- Emily Hazzard, Product Manager: Most of the AIs we talk about now are LLMs, so they'd probably tell you "they're only a language model and they can't dream." If they could dream, though, I imagine they'd have nightmares about people asking inane questions instead of leveraging the AI's full potential, or the AI equivalent of their teeth falling out (illogical embeddings or something).
- David Hazzard, Software Developer: Probably a day without being asked millions of questions nonstop. VacAItion.
- Lily Garcia Walton, Chief People Officer & General Counsel: Vector databases 🙂
On a scale of 1-100, how likely is it that the machines will rise up and take over?
- Emily Hazzard, Product Manager: Soon? Unlikely. But further out, who knows? Hopefully our machine overlords will be nice about it.
- David Hazzard, Software Developer: I don't think we'll mind, though.
- Lily Garcia Walton, Chief People Officer & General Counsel: 10.
- Jeremy Little, Tech Lead: Hold on, let me ask ChatGPT.