AI Autofixes
Data Processing
To use this feature, business customers must first opt-in by enabling the feature within Workspace Settings. When this feature is active, we will attempt to generate suggested code fixes for static analysis issues in your codebase. To do so, we will send the following data to OpenAI:
- The path and content of the file where the issue was located
- The details of the static analysis issue that was identified including the location and message
This feature uses a large language model (LLM) from our subprocessor OpenAI. Data shared with OpenAI through use of this feature will not be used to train their models.
Disabling the Feature
This feature is disabled by default and requires an opt-in to enable. If you would like to disable this feature, you can do so in your Workspace Settings.
Security Considerations
AI-generated code fixes use large language models (LLMs) that construct prompts from user-controlled data, including file contents, file paths, and linter messages. This creates a potential risk of indirect prompt injection, where malicious content embedded in analyzed code or linter output could influence the model’s suggestions.
Prompt Injection Risk
When generating fixes, the LLM receives:
- The file path and content being analyzed
- The linter tool name and rule key
- The issue message and code snippet
If any of these inputs contain crafted instructions (for example, in code comments or manipulated linter reports), they could potentially influence the AI to generate unexpected or malicious code changes.
Best Practices
Always review AI suggestions before applying. Treat AI-generated fixes as suggestions requiring human review, not as trusted automated changes. Carefully inspect the proposed changes before accepting them.
Avoid automatic application in untrusted contexts. When using the CLI with --fix --ai, fixes are applied automatically without human review. This bypasses the “human in the loop” protection that helps catch potentially malicious suggestions. We recommend:
- Reviewing suggested fixes in Qlty Cloud before using auto-apply
- Using version control to review all changes after auto-apply
- Not using
--fix --aiin automated CI/CD pipelines where fixes would be applied without human review
Understand what --unsafe allows. By default, fixes for certain rules that are more likely to produce incorrect results are blocked. The --unsafe flag allows these fixes, increasing the importance of manual review.