
With a continued focus on AI ethics and regulation, businesses need real-time solutions that bring more transparency and accountability to how the technology creates its outputs.

But as with any technology, AI carries unknowns. One of the most pressing issues facing AI today is the need for greater transparency and accountability in how the technology creates its outputs. Concerns about bias, fairness, and safety are top of mind, especially as AI becomes more integrated into our daily lives.

To address these concerns, a new approach to AI is emerging, one that combines the best of both worlds: the power and scalability of large language models (LLMs) and the precision and explainability of traditional machine learning techniques. Hybrid technologies, with humans closely involved in their creation, have the potential to revolutionize the way we innovate, ushering in more responsible and user-centric AI tools than ever before.

At the heart of this method is the idea of pairing the strengths of LLMs with traditional machine learning to create more robust and reliable AI systems. Large language models, such as GPT-4, are built on some of the most advanced deep learning algorithms ever created and are capable of understanding written language and generating human-like text. Traditional machine learning involves training models on large datasets and using statistical methods to make predictions, offering interpretability, scalability, and robustness in handling data.

By combining or layering these approaches, AI product builders can create systems that are both useful and transparent. Large language models provide the scalability and flexibility needed to process massive amounts of data and generate human-like text. At the same time, traditional machine learning techniques provide opportunities for fairness and transparency through interpretable models, explainable feature importance, and the ability to mitigate bias through careful feature engineering, algorithm design, and data annotation.
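
As a concrete illustration of this layering (a minimal sketch, not Grammarly's system), the Python snippet below scores LLM output with a small logistic regression model built on hand-engineered, human-readable features, so every flag traces back to a named signal. The features, toy training data, and sample output are all hypothetical.

```python
import re
from sklearn.linear_model import LogisticRegression

FEATURE_NAMES = ["avg_sentence_length", "passive_constructions", "hedging_words"]

def engineer_features(text):
    """Hand-crafted, human-readable signals (all hypothetical)."""
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_len = len(words) / max(len(sentences), 1)
    # Crude passive-voice heuristic: "was/were/is/are/been" + past participle.
    passives = len(re.findall(r"\b(?:was|were|is|are|been)\s+\w+(?:ed|en)\b", text))
    hedges = sum(w.lower().strip(".,") in {"maybe", "possibly", "arguably"} for w in words)
    return [avg_len, passives, hedges]

# Toy training data: 1 = an editor flagged the text for revision.
texts = [
    "The report was written by the team and was reviewed by managers.",
    "We shipped the fix and verified it in production.",
    "Results were possibly skewed by factors that were arguably ignored.",
    "The team fixed the bug quickly.",
]
labels = [1, 0, 1, 0]

clf = LogisticRegression().fit([engineer_features(t) for t in texts], labels)

# Interpretability: each coefficient maps to a named feature, so a
# flag is explainable rather than a black-box score.
for name, coef in zip(FEATURE_NAMES, clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")

llm_output = "The decision was made by the committee and was announced later."
prob = clf.predict_proba([engineer_features(llm_output)])[0, 1]
print(f"needs-revision probability: {prob:.2f}")
```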

And while bias and fairness issues in LLMs may eventually be worked out through real-world testing, it will take time to get right. For scaled, user-facing products that reach millions of people, a hybrid approach allows for quick fixes within hours or days, prioritizing user trust.

Enhancing GPT-generated Text with Hybrid AI Systems

At Grammarly, we combine multiple technologies that work together to create more reliable and contextually relevant results. When we introduced LLMs into our proprietary technology, we conducted research to validate our hypothesis that layered techniques lead to better overall outcomes. To do this, we quantitatively evaluated the contribution of each technology in the system.
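
For a flavor of what that kind of measurement can look like, here is a hypothetical evaluation harness in Python. The two layer functions are toy stand-ins (a doubled-word check and a passive-voice heuristic), not Grammarly's checkers, and the corpus is invented; the point is simply to tally how many suggestions each layer contributes to the combined system.

```python
import re
from collections import Counter

def grammar_layer(text):
    """Stand-in grammar checker: flags doubled words."""
    words = text.lower().split()
    return [f"doubled word: {a!r}" for a, b in zip(words, words[1:]) if a == b]

def style_layer(text):
    """Stand-in style checker: flags passive-voice-like patterns."""
    return [f"possible passive: {m!r}"
            for m in re.findall(r"\b(?:was|were)\s+\w+(?:ed|en)\b", text)]

# Toy corpus standing in for a sample of LLM-generated text.
corpus = [
    "The the results were reviewed by the committee.",
    "Mistakes were repeated, but lessons were learned.",
    "We fixed the issue and shipped the patch.",
]

tally = Counter()
for text in corpus:
    tally["grammar layer"] += len(grammar_layer(text))
    tally["style layer"] += len(style_layer(text))

# Per-layer contribution to the combined system's suggestions.
total = sum(tally.values()) or 1
for layer, n in tally.items():
    print(f"{layer}: {n} suggestions ({n / total:.0%} of total)")
```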

By design, GPT-generated text should be “largely error-free” (Thomas Hügle, 2023). Quantitative experimentation confirms that egregious grammatical errors rarely appear in its outputs.

Our findings were consistent with this: text created by generative AI produces relatively few issues related to grammar or spelling, as expected. However, when we subsequently ran the text through Grammarly’s proprietary AI and machine learning systems, our research signaled that additional stylistic and safety improvements could be made (in areas like inclusive language, passive voice, and tone). In essence, hybrid AI presents an opportunity to improve the overall quality of GPT outputs.
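
To give a flavor of the stylistic passes mentioned above, here is a deliberately simple, hypothetical sketch: a dictionary-based inclusive-language check applied to otherwise grammatical LLM output. The term pairs and the sample sentence are illustrative only, not Grammarly's suggestion data.

```python
# Illustrative term pairs; not Grammarly's actual suggestion dictionary.
INCLUSIVE_ALTERNATIVES = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "whitelist": "allowlist",
}

def inclusive_language_suggestions(text):
    """Return (term, alternative) pairs found in the text."""
    suggestions = []
    for word in text.lower().split():
        term = word.strip(".,;:!?")
        if term in INCLUSIVE_ALTERNATIVES:
            suggestions.append((term, INCLUSIVE_ALTERNATIVES[term]))
    return suggestions

gpt_output = "The chairman asked for more manpower before the launch."
for term, alternative in inclusive_language_suggestions(gpt_output):
    print(f"consider replacing {term!r} with {alternative!r}")
```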

While deeper analysis is needed, particularly of AI writing suggestions that may not be accurate (false positives and negatives), the abundance of stylistic issues we found corresponds with broader quality concerns about GPT-generated text.

Building Responsible and User-Centric AI Requires Humans in the Loop

But it's not just about the technology; it's about the people behind it. Responsible AI is a cross-functional effort, involving everyone from data scientists, linguists, and engineers to product managers and designers.

At the heart of this effort is the need to evaluate and mitigate bias and improve fairness, both in the data used to train AI models and in the design of the AI systems themselves. This requires a deep understanding of the ethical implications of AI and a commitment to building technologies that are safe, reliable, and user-centric. It's important to involve internal experts and research teams in every step of AI product development to ensure a responsible, human-in-the-loop approach that considers users' and society's best interests.

Companies across industries are beginning to embrace this hybrid approach to AI. From healthcare to finance to transportation, organizations recognize the potential of this approach to create more responsible and user-centric technologies. And as more companies embrace this approach, we can expect to see even greater innovation and progress in the field of AI.

As we look to the future of AI, one thing is clear: a hybrid approach that combines the power of LLMs with the precision of other machine learning approaches is the most responsible and user-centric way to build AI-powered products. By leveraging the strengths of both approaches, we can create AI systems that are powerful, transparent, and inclusive. And by working together to evaluate and mitigate bias and improve fairness, we can ensure that AI is used responsibly and ethically.


Joe Xavier is Grammarly's Chief Technology Officer. He and his global engineering team are focused on developing innovative writing assistance technologies for an ever-growing user base.
