โ† Back to Research

Research Area

AI-Assisted Programming

Using large language models as intelligent co-pilots for HPC developers – interpreting compiler reports, migrating legacy code, optimizing scientific applications, and teaching parallel programming. Chunhua's vision: AI doesn't replace compilers; it makes them speak human.

📖 Overview

The advent of large language models (LLMs) has opened fundamentally new possibilities for programming tools. Chunhua's research at the frontier of AI-assisted programming is grounded in a core insight: LLMs are most powerful when paired with compiler analysis. Rather than using LLMs in isolation – where they can hallucinate or misunderstand low-level performance details – his approach uses compiler-derived context (optimization reports, call graphs, data-flow analysis) to ground LLM reasoning in verifiable program facts.
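As a concrete illustration of the grounding idea, the sketch below pairs a code snippet with compiler-verified facts before the LLM is asked anything. This is a minimal, hypothetical sketch: `CompilerRemark` and `build_grounded_prompt` are illustrative names, not the API of any tool described on this page.

```python
from dataclasses import dataclass

# Hypothetical sketch of compiler-grounded prompting. The record layout
# and function names are assumptions made for illustration only.

@dataclass
class CompilerRemark:
    file: str
    line: int
    message: str  # e.g. an optimization remark emitted by the compiler

def build_grounded_prompt(source: str, remarks: list[CompilerRemark]) -> str:
    """Pair the code under analysis with verified compiler facts so the
    LLM reasons from evidence instead of guessing."""
    facts = "\n".join(f"- {r.file}:{r.line}: {r.message}" for r in remarks)
    return (
        "You are assisting an HPC developer.\n"
        "Compiler-verified facts about this code:\n"
        f"{facts}\n\n"
        "Source under analysis:\n"
        f"{source}\n\n"
        "Explain why the reported optimizations did not fire and "
        "suggest a concrete code change."
    )

remark = CompilerRemark(
    "stencil.c", 42,
    "loop not vectorized: unsafe dependent memory operations")
prompt = build_grounded_prompt(
    "for (i = 1; i < n; i++) a[i] += a[i-1];", [remark])
```

The point of the structure is that every claim the model is allowed to make can be traced back to a fact line the compiler actually emitted.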

This philosophy manifests across several research directions. CompilerGPT translates opaque compiler optimization diagnostics into actionable developer guidance. HPC-GPT explores GPT-based assistants for HPC programming tasks. Fortran2CPP automates the notoriously difficult task of migrating legacy Fortran scientific codes to modern C++ – a pressing challenge for national laboratories sitting on decades of legacy code. The Reductive Analysis technique (PLDI 2025) provides a principled framework for applying LLMs to large, complex codebases by progressively reducing program context to what's relevant.

Beyond individual tools, Chunhua's work addresses the broader challenge of HPC developer productivity: how do we help the scientists and engineers building tomorrow's exascale applications write better code, faster, with fewer bugs? The answer, he argues, is not to replace programmers with AI but to give them AI tools that understand the specific constraints and idioms of scientific high-performance computing – tools that speak both the language of the developer and the language of the machine.

📄 Key Publications

2025 PLDI 2025

Reductive Analysis with Compiler-Guided LLMs for Code Optimizations

Chunhua Liao et al.

Introduces a novel methodology for applying LLMs to large, complex programs: reductive analysis progressively strips away program context until the LLM can focus on the relevant optimization opportunity. Compiler analysis guides which reductions are safe and informative. Published at PLDI, the flagship programming languages conference.
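To make the "progressively strips away context" idea concrete, here is a toy sketch of one possible reduction loop: keep only functions that can reach the optimization target, then trim until a context budget is met. The paper's actual algorithm is compiler-guided and more sophisticated; all names and the budget heuristic here are illustrative assumptions.

```python
# Toy sketch of reductive analysis: relevance reduction via the call
# graph, then size reduction against a context budget. Not the paper's
# actual algorithm; names and heuristics are assumptions.

def reduce_context(functions, call_graph, target, budget):
    """functions: {name: source}; call_graph: {caller: [callees]};
    target: function containing the optimization opportunity;
    budget: max total characters of context to hand the LLM."""
    # 1. Relevance reduction: keep only functions that can reach the target.
    relevant = {target}
    changed = True
    while changed:
        changed = False
        for caller, callees in call_graph.items():
            if caller not in relevant and any(c in relevant for c in callees):
                relevant.add(caller)
                changed = True
    kept = {n: s for n, s in functions.items() if n in relevant}
    # 2. Size reduction: drop the largest non-target functions until we fit.
    while sum(len(s) for s in kept.values()) > budget and len(kept) > 1:
        victim = max((n for n in kept if n != target),
                     key=lambda n: len(kept[n]))
        del kept[victim]
    return kept

funcs = {
    "main": "int main() { hot_loop(); return 0; }",
    "hot_loop": "void hot_loop() { /* missed vectorization here */ }",
    "unused_helper": "void unused_helper() { }",
}
graph = {"main": ["hot_loop"], "unused_helper": []}
kept = reduce_context(funcs, graph, "hot_loop", budget=10_000)
```

In the example, `unused_helper` never reaches `hot_loop`, so it is stripped before the LLM ever sees the program.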

2025 C3PO-HPC @ ISC 2025

CompilerGPT: Leveraging LLMs for Compiler Optimization Reports

Chunhua Liao et al.

CompilerGPT feeds compiler optimization remarks into an LLM pipeline that explains why optimizations did or did not fire, suggests code changes, and links to relevant OpenMP or vectorization directives. Dramatically reduces the expertise required to act on compiler feedback.
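The first stage of such a pipeline is turning raw compiler output into structured records a prompt can cite. The sketch below parses clang-style optimization remarks (as emitted with flags like `-Rpass-missed=loop-vectorize`); the record layout is an assumption for illustration, not CompilerGPT's internal format.

```python
import re

# Sketch: parse clang-style optimization remarks of the form
#   file.c:LINE:COL: remark: MESSAGE [-Rpass-missed=PASS]
# into dicts. The dict layout is an illustrative assumption.

REMARK_RE = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+): remark: (?P<msg>.*?)"
    r"(?: \[(?P<pass>-Rpass[^\]]*)\])?$"
)

def parse_remarks(compiler_output: str):
    remarks = []
    for raw in compiler_output.splitlines():
        m = REMARK_RE.match(raw.strip())
        if m:
            remarks.append({
                "file": m.group("file"),
                "line": int(m.group("line")),
                "message": m.group("msg"),
                "pass": m.group("pass"),
            })
    return remarks

sample = ("kernel.c:17:3: remark: loop not vectorized: "
          "cannot identify array bounds [-Rpass-missed=loop-vectorize]")
parsed = parse_remarks(sample)
```

Once remarks are structured, each one can be matched to the source lines it refers to and handed to the LLM alongside the code.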

2024 IWOMP 2024

An Interactive OpenMP Programming Book with LLM Assistance

Chunhua Liao et al.

A novel approach to parallel programming education: using LLMs to power an interactive, adaptive OpenMP textbook that explains concepts, generates examples on demand, and answers follow-up questions. Presented at the International Workshop on OpenMP.

2024

Fortran2CPP: Automating Fortran-to-C++ Migration using LLMs

Chunhua Liao et al.

Addresses the critical national laboratory challenge of modernizing legacy Fortran codes. Uses LLMs to automatically translate Fortran to idiomatic, performant C++, validated against the original code using automated test suites. Tackles array semantics, COMMON blocks, and numerical precision preservation.
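A hedged sketch of the validation idea described above: run the original Fortran and the translated C++ on the same inputs, then compare numeric outputs within floating-point tolerances rather than bit-for-bit, since legal translations can reorder arithmetic. Function and parameter names here are illustrative, not the paper's harness.

```python
import math

# Sketch of tolerance-based output comparison for translation validation.
# Exact equality would flag false failures whenever the C++ version
# reassociates sums or uses different intermediate precision.

def outputs_match(fortran_out, cpp_out, rel_tol=1e-9, abs_tol=1e-12):
    """True if every pair of corresponding outputs agrees within tolerance."""
    if len(fortran_out) != len(cpp_out):
        return False
    return all(math.isclose(f, c, rel_tol=rel_tol, abs_tol=abs_tol)
               for f, c in zip(fortran_out, cpp_out))

ok = outputs_match([2.718281828459045, 1.0 / 3.0],
                   [2.718281828459045, 0.3333333333333333])
```

The tolerances themselves are a judgment call per application; codes that are sensitive in the last bits need tighter bounds or interval-based checks.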

2023 SC-W 2023

HPC-GPT: Integrating Large Language Models for High-Performance Computing

Chunhua Liao et al.

One of the first systematic explorations of using GPT-based models for HPC tasks: code optimization, OpenMP directive suggestion, documentation generation, and debugging assistance. Establishes baselines and identifies key limitations of general-purpose LLMs for HPC-specific tasks.
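One baseline task in this line of work, OpenMP directive suggestion, needs a scoring rule, since clauses may legally appear in any order. The sketch below shows one simple way to compare a suggested directive against a reference; it is an assumed illustration, not the paper's actual metric.

```python
# Simplistic sketch: canonicalize an OpenMP directive by sorting its
# tokens so clause order doesn't affect the comparison. (This also sorts
# the directive-name tokens themselves, which is cruder than a real
# grader would be; names here are illustrative assumptions.)

def normalize_directive(directive: str) -> tuple:
    return tuple(sorted(directive.replace("#pragma omp", "").split()))

def directive_match(suggested: str, reference: str) -> bool:
    return normalize_directive(suggested) == normalize_directive(reference)

same = directive_match(
    "#pragma omp parallel for reduction(+:sum) schedule(static)",
    "#pragma omp parallel for schedule(static) reduction(+:sum)",
)
```

A production grader would parse clauses properly and, ideally, compile and run the annotated code, but even a token-level match gives a reproducible baseline.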

💻 Software & Tools

🔄

Fortran2CPP

Code Migration Tool

An LLM-powered tool for automatically migrating legacy Fortran scientific codes to modern C++. Handles the distinctive challenges of Fortran-to-C++ translation including array indexing conventions, COMMON blocks, numerical precision, and implicit type coercions.

LLM · Fortran · C++ · Migration
💬

HPC-GPT

HPC AI Assistant Research

A research prototype exploring GPT integration for HPC development tasks: OpenMP directive recommendation, performance analysis question answering, and code documentation. Establishes benchmarks for evaluating LLM capabilities on HPC-specific tasks.

GPT · HPC · OpenMP · Research Prototype

💡 Impact & Insights

LLMs don't replace compilers – they translate between compilers and humans. A compiler knows what happened; an LLM can explain why it matters and what to do next.
  • Compiler-grounded LLMs dramatically outperform general-purpose LLMs on optimization tasks because compiler analysis provides verified facts about the program – loop trip counts, alias analysis, vectorization blockers – that LLMs alone cannot infer.
  • The Reductive Analysis methodology (PLDI 2025) is a general contribution: any complex program analysis problem can benefit from progressively narrowing context until the LLM can reason reliably.
  • Fortran modernization is one of the most economically significant applications of AI in scientific computing – national laboratories have millions of lines of legacy Fortran that are expensive to maintain and port to new hardware.
  • CompilerGPT democratizes performance engineering: a domain scientist with no compiler background can now understand why their code isn't vectorizing and what pragma to add, without consulting an expert.
  • The interactive OpenMP book represents a new model for technical education in a rapidly evolving field – AI-powered learning materials that adapt to the reader's level and answer questions in real time.