Session: Typed Python + LLMs: Building Safer, More Accurate AI-Driven Software
As large language models (LLMs) become integral to software development, powering activities such as code generation, code review, and automated refactoring, the quality of the code they interact with matters more than ever. For developers using Python, the dynamic nature of the language, while flexible, can lead to ambiguous code that is difficult for both humans and AI systems to reason about safely.
This talk explores how adopting typed Python (using type hints and static type checkers such as Mypy and Pyrefly) can improve the safety and accuracy of LLM-driven software development. We’ll cover:
- Reducing Ambiguity for LLMs: How explicit type annotations provide LLMs with clearer intent, enabling more precise code generation, completion, and review.
- Safer Automated Refactoring: How typed code allows LLMs to make safer, context-aware changes, minimizing the risk of introducing subtle bugs.
- Feedback Loops: How LLMs can leverage type errors and static analysis feedback to iteratively improve generated code, catching issues before they reach production.
- Case Studies: Real-world examples of how typed Python has enabled safer, more accurate AI-driven development workflows.
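As a minimal sketch of the first point, consider how an explicit annotation removes the ambiguity an LLM (or a human reviewer) would otherwise face. The `Invoice` class and `total_due` function below are hypothetical examples chosen for illustration, not material from the talk itself:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    subtotal: float
    tax_rate: float


def total_due(invoice: Invoice) -> float:
    # The annotation tells a type checker (and an LLM reading this code)
    # exactly what attributes 'invoice' carries, so a completion like
    # 'invoice.tax_rate' can be generated and verified with confidence.
    return invoice.subtotal * (1 + invoice.tax_rate)


# A static checker such as Mypy would flag misuse before runtime, e.g.:
#   total_due({"subtotal": 100.0})
#   error: Argument 1 to "total_due" has incompatible type "dict[str, float]"
print(total_due(Invoice(subtotal=100.0, tax_rate=0.25)))
```

In the untyped version (`def total_due(invoice):`), nothing constrains what `invoice` is, so both an LLM and a refactoring tool must guess; the typed version turns that guess into a checkable contract.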
Whether you’re a Python developer integrating LLMs into your development process or building tools that rely on AI code understanding, typing is a foundational practice for maximizing both safety and productivity. Join us to learn practical strategies and to see the impact of typing on the next generation of AI-powered software engineering.