Why and how to create corporate genAI policies


As companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another shared code with ChatGPT and “requested code optimization.”
