In Today’s Issue:
OpenAI flags Chinese malicious actors
Mistral launches AI coding assistant
Read time: 3 minutes
OpenAI flags Chinese malicious actors

OpenAI reports a rise in Chinese groups using ChatGPT for covert influence and cyber operations, though the activities remain small in scale and reach limited audiences. The findings, released Thursday, describe use cases such as generating politically charged social media posts, supporting cyberattacks through script modification and brute-force tooling, and spreading divisive U.S. political content via AI-generated personas. One banned operation included posts criticizing Taiwan, Pakistani activists, and U.S. tariffs. OpenAI says these actors also used ChatGPT for open-source research and social media automation. China’s foreign ministry has not commented on the report. OpenAI continues to publish such findings as concerns grow over the misuse of generative AI.
Mistral launches AI coding assistant

French AI startup Mistral has launched Mistral Code, a “vibe coding” assistant aimed at competing with Windsurf, Cursor, and GitHub Copilot. Built on the open-source Continue project, Mistral Code bundles Mistral’s in-house models with an IDE assistant, enterprise tooling, and deployment options spanning cloud, reserved capacity, and air-gapped on-premises GPUs. Available in private beta for JetBrains IDEs and VS Code, it supports more than 80 programming languages and tasks such as code completion, search, and multi-step refactoring. Under the hood it uses Mistral’s Codestral, Codestral Embed, Devstral, and Mistral Medium models. Clients including Capgemini, Abanca, and SNCF are already using the tool in production. Enterprises can fine-tune the models and access admin tooling for observability and usage analytics. Mistral plans to keep developing the platform and to contribute back to the Continue project.
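
Mistral Code itself is a packaged IDE product, but the models it bundles (such as Codestral) are also reachable through Mistral’s public API. As a rough sketch only, assuming the official mistralai Python SDK (v1+) and a MISTRAL_API_KEY environment variable, a code-completion request to Codestral might look like the following; the prompt and setup are illustrative and not taken from the Mistral Code product.

```python
import os

from mistralai import Mistral  # official Mistral Python SDK (v1+)

# Illustrative only: this targets Mistral's public API, not the
# Mistral Code IDE integration. Assumes MISTRAL_API_KEY is set.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Ask Codestral (one of the models bundled in Mistral Code) to
# complete a small function. "codestral-latest" is the API alias.
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {
            "role": "user",
            "content": (
                "Complete this Python function:\n\n"
                "def is_palindrome(s: str) -> bool:\n    ..."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```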