Vibe coding – the practice of feeding plain-text prompts into generative AI tools to create new software – is both a promising development in AI-driven software creation and a potential source of serious problems. Harnessing either highly generalised large language models (LLMs) or more specialised coding platforms such as Cursor or Tempo Labs, vibe coding can radically speed up the creation, prototyping and debugging of a product or tool. Even so, AI-generated code often arrives pockmarked with mistakes and vulnerabilities, introducing new security risks and a fresh source of paranoia for beleaguered security operations centres (SOCs).

This risk should not be dismissed. Vibe coding should go hand in hand with vetting procedures that weed out vulnerabilities in the resulting apps, even at the risk of undermining the sense of freedom that comes with conjuring new software seemingly out of whole cloth. This regime should include regular code reviews, a system for monitoring every vibe-coded project from start to finish, and clear guidelines on how to use these tools securely.
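A vetting step of this kind can be sketched as a simple pre-merge gate. The patterns below are illustrative placeholders, not an exhaustive check – a real pipeline would run a proper static-analysis (SAST) scanner over the generated code rather than hand-rolled regexes:

```python
import re

# Illustrative patterns a vetting gate might flag in AI-generated Python code.
# These are placeholders only; a production pipeline would use a dedicated
# security scanner instead of regex matching.
RISKY_PATTERNS = {
    "use of eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "shell injection risk": re.compile(r"shell\s*=\s*True"),
    "hardcoded secret": re.compile(r"(api_key|password|secret)\s*=\s*['\"]"),
}

def review_gate(source: str) -> list:
    """Return a list of findings for one snippet; an empty list means it passes."""
    return [label for label, pattern in RISKY_PATTERNS.items()
            if pattern.search(source)]
```

Wiring a gate like this into version control means every vibe-coded change gets at least a baseline check before a human reviewer sees it.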

Purpose-built solutions to tackle complexity

CIOs and CISOs should also step back and consider the newfound complexity that AI systems introduce into the stack. To mitigate these issues, vibe coding should initially be limited to smaller, non-critical projects. A clear and simple intent leads to clearer plain-text prompts, and breaking a project into small, testable parts helps keep it on track.
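"Small, testable parts" in practice means asking the AI for one narrow function at a time and pairing it immediately with a check. The function below is a hypothetical example of that scope – a single unit that can be verified in isolation before anything depends on it:

```python
import re

def redact_email(text: str) -> str:
    """One small, testable unit: mask email addresses before logging.

    The name and scope here are hypothetical -- the point is that each
    vibe-coded piece is narrow enough to test on its own.
    """
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[redacted]", text)
```

Because each unit ships with its own test, a faulty AI-generated piece is caught at the boundary of that piece, not deep inside a large, opaque codebase.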

Finally, companies considering an embrace of vibe coding platforms should choose their tools wisely. Prioritising reliable data and using trusted, purpose-built AI technologies will help businesses realise value faster while keeping security a top priority.

Transparency is key for effective compliance in vibe coding

The other key risk for businesses adopting vibe coding is heightened regulatory exposure. AI-generated code may bypass essential privacy and security protocols, complicating efforts to comply with regulations such as GDPR.

Ensuring automation is secure and compliant with data protection laws is critical. As the use of vibe coding grows, businesses must regularly assess and monitor their AI systems' performance, staying agile and adapting to evolving regulatory frameworks to remain compliant.

Transparency is key. For compliance purposes, every vibe-coded project should be carefully documented to record what it does, where its data comes from and where that data is stored. Always document sources, check licences, and favour open datasets.
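That documentation can live as a structured record alongside each project. The fields and class name below are assumptions for illustration, but they capture the minimum the article calls for – purpose, data provenance, storage location and licences:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectRecord:
    """Hypothetical compliance record for one vibe-coded project."""
    name: str            # project identifier
    purpose: str         # what the project does
    data_sources: list   # where its data comes from
    storage_region: str  # where the data is stored (relevant under GDPR)
    licences: dict = field(default_factory=dict)  # data source -> licence

    def undocumented_sources(self) -> list:
        """Flag any data source that lacks a recorded licence."""
        return [s for s in self.data_sources if s not in self.licences]
```

A record with a data source but no licence entry immediately reports that source as undocumented, turning "check licences" from a reminder into a routine check.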

Relying solely on AI tools is not enough. Human oversight is essential throughout the development process. Without proper oversight, organisations risk significant legal consequences, as tracking security and compliance gaps within AI-driven systems can be particularly challenging.

By taking a proactive approach and working with trusted partners, businesses can successfully navigate the security challenges that accompany working with emerging AI technologies. The goal should be to develop a strategy that balances speed with caution, allowing companies to embrace the benefits of AI while minimising the risks.

Success lies in understanding the security risks and creating systems that prioritise both monitoring and compliance. This will ultimately boost productivity and customer satisfaction. 

Sergii Ostryanko is an ISV Technical Expert at ABBYY
