What are some of the most interesting challenges when it comes to developing AI solutions?
- Emily Hazzard, Product Manager: The main things that come to mind likely won't come as a surprise: copyright vs. fair use; ethics, safety, privacy, DEI, and the potential to amplify bias in LLM training data; climate change (the amount of energy it currently takes to keep these systems running is astounding!); and possibly my favorite: AI literacy as a new level of critical thinking. But the product person in me also thinks about the shinier side of these challenges: these are all existing problems. AI didn't create them; it just amplified them. How many of these problems will we be able to solve now that the AI gold rush is forcing us to look them in the eye?
- Eric Goad, Senior Software Developer: Keeping up! The models, libraries, and techniques change all the time, and there are far more unknowns than in traditional development. This also makes it more fun!
- Lily Garcia Walton, Chief People Officer & General Counsel: As I recently learned when I completed an AI Business Applications course through the MIT Sloan School of Management, developing AI solutions is a highly complex and technical undertaking. It is one thing to ask GPT-4 or another large language model to polish writing or compile research. It is quite another to contemplate how regression analysis could be used to generate genuine business insights.
- Jeremy Little, Tech Lead: AI, or more specifically generative AI, is continuing to grow and change at an extremely rapid pace. The progress over the last two years in the generative space has allowed brand new industries to pop up, and it seems the progress is not slowing down. This means that when we think about our own AI solutions, we need to constantly look ahead and make educated guesses about where the technology will continue to grow and change versus where it will start to stabilize.
- David Hazzard, Software Developer: The speed at which these technologies change can pose a unique problem for developers. There's so much documentation surrounding the nuances of each release that can (and often does) conflict with previous iterations. A master of V1 can have a hard time acclimating to V2.
Why is it important that we continue to experiment and test AI solutions?
- Eric Goad, Senior Software Developer: I'm convinced this is the future of products, technology, work, and research.
- David Hazzard, Software Developer: The same reason it's important to test an airplane's landing gear. This is a volatile and transformative technology and we need to be able to stick the landing.
- Jeremy Little, Tech Lead: With recent improvements in generative AI, we are seeing new ways of interacting with computers. This will cause shifts in how people interact with technology. While we aren't quite sure where the immediate application of generative AI is in the consumption of content, we can be sure that some aspects of how people interact with content will change. Experimenting, understanding, and innovating with generative AI in our domain will be crucial to keeping up with how the population interacts with content and maximizing how useful our platform can be.
- Lily Garcia Walton, Chief People Officer & General Counsel: AI holds immense potential for transformative social good - e.g., matching organ donors, detecting illness, personalizing medicine, and climate change mitigation. We should work diligently as a human race to deploy AI tools to help solve our most intractable problems.
- Emily Hazzard, Product Manager: Experimenting is critical to understanding the boundaries of what AI can do, what it could be developed to do, and what humans can do by leveraging it. Testing is key to understanding what it can't do well (yet), identifying how and when it can break and cause harm, and establishing boundaries around when and how it should be used.
We also wanted to ask a couple more lighthearted (or, at least, not work-related) questions about AI in our lives more broadly.
Do you think we’ll ever see AI winning creative awards? (Grammys, Pulitzer Prizes, etc.)
- Lily Garcia Walton, Chief People Officer & General Counsel: I could easily see this becoming a category in the Academy Awards!
- Emily Hazzard, Product Manager: Probably, but not for a long time. We may have to have an AI Civil Rights movement before we get there, and AI-generated works will need to be copyrightable before we can cross that bridge. That said, the music industry has a lot of experience in navigating some of the key challenges AI poses - artist credit, copyright, handling creative likenesses, responsibly borrowing snippets of other songs - so they're likely to get there before the film industry. Just a hunch.
- David Hazzard, Software Developer: Seeing some of the AI-generated movie trailers that have come out in the past month, I think it's on the horizon for sure.
- Jeremy Little, Tech Lead: Not directly. However, I think generative AI will become increasingly used in the production of artistic content. For example, a larger portion of artistic content made today is created with computers, whether it be digital audio workstations for music, video editing software for film, or automatic text editing software for writing. Despite this, we wouldn't consider the computer responsible for the art. I imagine AI will have similar, if not more profound, impacts on the process.
Favorite Fictional AI?
- Emily Hazzard, Product Manager: Jexi (from the eponymous movie). It's an objectively bad movie, but Rose Byrne as a foul-mouthed, ridiculing AI "helper" is a delight. Runner-up is Jude Law's character in the aptly titled film "AI."
- David Hazzard, Software Developer: WALL-E. He's a delight.
- Lily Garcia Walton, Chief People Officer & General Counsel: HAL.
- Jeremy Little, Tech Lead: WALL-E!