Ad
Skip to content
Read full article about: AI agents in GitHub and GitLab workflows create new enterprise security risks

Aikido Security warns that plugging AI agents into GitHub and GitLab workflows opens up a serious vulnerability in enterprise environments. The issue hits widely used tools like Gemini CLI, Claude Code, OpenAI Codex, and GitHub AI Inference.

According to the security firm, attackers can slip hidden instructions into issues, pull requests, or commits. That text then flows straight into model prompts, where the AI interprets it as a command instead of harmless content. Because these agents often have permission to run shell commands or modify repos, a single prompt injection can leak secrets or alter workflows. Aikido says tests showed this risk affected at least five Fortune 500 companies.

Aikido

Google patched the issue in its Gemini CLI repo within four days, according to the report. To help organizations secure their pipelines, Aikido published open search rules and recommends limiting the tools available to AI agents, validating all inputs, and avoiding the direct execution of AI outputs.

Read full article about: Shopify CEO and ex-OpenAI researcher agree that context engineering beats prompt engineering

Shopify CEO Tobi Lütke and former Tesla and OpenAI researcher Andrej Karpathy say "context engineering" is more useful than prompt engineering when working with large language models. Lütke calls it a "core skill," while Karpathy describes it as the "delicate art and science of filling the context window with just the right information for the next step."

Too little or of the wrong form and the LLM doesn't have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial.

Andrej Karpathy

This matters even with large context windows, as model performance drops with overly long and noisy inputs.

Read full article about: Claude 4 can apparently follow a 60,000-character system prompt

The full system prompt for Claude 4 has been leaked by X user "Pliny the Liberator" and is now available on GitHub. The document, over 60,000 characters long, sets detailed rules for tone, roles, source handling, and banned content. It controls the model at the system level, before any user prompt is processed. I find it strange that large language models often fail to follow short user instructions, yet seem able to follow complex internal prompts like this one. If you know the reason for that, shoot me an email.

Read full article about: Polite prompts can improve AI responses, says Deepmind researcher

Does saying "please" and "thank you" really help when talking to AI? According to Murray Shanahan, a senior researcher at Google Deepmind, being polite with language models can actually lead to better results. Shanahan says that clear, friendly phrasing—and using words like "please" and "thank you"—can improve the quality of a model's responses, though the effect depends on the specific model and the context.

There's a good scientific reason why that [being polite] might get better performance out of it, though it depends – models are changing all the time. Because if it's role-playing, say, a very smart intern, then it might be a bit more stroppy if not treated politely. It's mimicking what humans would do in that scenario.

Murray Shanahan