Highest Rated Comments


tomgoldsteincs · 189 karma

Why can’t these generated patterns just be added to the training data, so it will look for someone wearing that sweater?

Adversarial AI is a cat and mouse game. You can certainly add any fixed pattern to the training data, and that pattern will no longer work as an invisibility cloak. However, then you can make a different pattern. There are “adversarial training” methods that can make a detector generally resistant to this category of attacks, but these kinds of training methods tend to result in models that perform poorly, and I think it’s unlikely that any surveillance organization would want to use them at this time.
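For concreteness, here's a minimal sketch of what adversarial training looks like in PyTorch. The names and hyperparameters are illustrative, not any particular production system: the inner loop searches for a worst-case perturbation of each batch, and the outer loop trains on those perturbed inputs, which is exactly the step that tends to cost clean accuracy.

```python
# Minimal adversarial training sketch (illustrative, assumes a PyTorch
# image classifier with inputs in [0, 1] and a standard DataLoader).
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find an L-infinity bounded perturbation that raises the loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()          # ascend the loss
            delta.clamp_(-eps, eps)                     # stay inside the eps-ball
            delta.data = (x + delta).clamp(0, 1) - x    # keep pixels valid
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """Outer minimization: train on the worst-case perturbed inputs."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        delta = pgd_perturb(model, x, y)                # craft the attack first
        optimizer.zero_grad()                           # discard grads from the attack
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()
```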

tomgoldsteincs · 134 karma

All of the patterns on my cloaks are computer generated. We have tried to do it with hand-crafted patterns, but algorithmically generated patterns are vastly more powerful.

Here's the code for making algorithmically crafted patterns. You can do it yourself!

https://github.com/zxwu/adv_cloak
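If you just want the general flavor rather than the repo's exact pipeline, the sketch below shows the basic recipe: treat the patch pixels as trainable parameters and run gradient descent to push down a detector's confidence. The detector interface, the data loader, and the fixed paste location here are simplifying assumptions, not the code from the link above.

```python
# Generic adversarial patch ("cloak") optimization sketch.
# Assumptions: `loader` yields image batches of shape [B, 3, H, W] in [0, 1],
# and `detector` returns a per-image person-detection confidence.
import torch

def paste_patch(images, patch, top=50, left=50):
    """Overlay the patch at a fixed location in each image (toy placement)."""
    patched = images.clone()
    _, ph, pw = patch.shape
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched

def optimize_patch(detector, loader, patch_size=(3, 100, 100), lr=0.03, device="cpu"):
    # The optimizer updates only the patch pixels, never the detector.
    patch = torch.rand(patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for images in loader:
        images = images.to(device)
        scores = detector(paste_patch(images, patch))   # assumed per-image confidences
        loss = scores.mean()                            # push detection confidence down
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                          # keep pixels in a printable range
    return patch.detach()
```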

tomgoldsteincs · 83 karma

AI systems are MUCH easier to hack than classical systems.

Classical security researchers focus on software-based vulnerabilities. Examples of high-profile software attacks are things like Heartbleed (https://heartbleed.com/), which result from a programmer making a very subtle mistake. Finding these subtle mistakes inside of huge codebases is really tough. In fact, many software development tools and programming languages exist to automatically check that these kinds of bugs are not present in code before it is deployed.

Artificial neural networks, on the other hand, are black boxes with millions (or even billions) of parameters that we don't understand. Tools for checking for and removing their security vulnerabilities are in their infancy, and only work to prevent extremely restricted forms of attack.

While it takes an entire office building of security researchers to occasionally find software-based vulnerabilities (and many nations/militaries have these office buildings), any competent AI programmer can find a vulnerability in an artificial neural network very quickly.
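To make that concrete, here's a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks, assuming a PyTorch classifier with inputs in [0, 1]. A single gradient step is often enough to flip an undefended model's prediction.

```python
# FGSM sketch: one gradient step in the direction that increases the loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by +/- eps along the loss gradient, then clip to valid pixels.
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```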

tomgoldsteincs · 71 karma

Generative AI has already become quite popular, thanks to open-source projects like Stable Diffusion. I think this technology will continue to mature rapidly.

Diffusion models raise a lot of security questions. For example, are diffusion models "stealing" art and cloning objects from their training data? If so, what are the legal and copyright implications of this? Diffusion models have evolved so quickly that we've arrived at strong generative models without first developing the technical tools for answering these legal questions.
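As a rough illustration of the kind of technical tool that's still missing, here's a naive sketch that flags suspiciously close matches between generated images and training images using off-the-shelf ResNet-50 features from torchvision. The feature extractor and threshold are stand-ins; serious memorization studies use more careful similarity metrics.

```python
# Naive near-duplicate check: embed images with a pretrained ResNet-50 and
# flag pairs whose cosine similarity exceeds a threshold.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
model.fc = torch.nn.Identity()           # use pooled features, not class logits
preprocess = weights.transforms()

@torch.no_grad()
def embed(images):                        # images: list of PIL images
    batch = torch.stack([preprocess(im) for im in images])
    feats = model(batch)
    return torch.nn.functional.normalize(feats, dim=-1)

def flag_near_duplicates(generated, training, threshold=0.95):
    sims = embed(generated) @ embed(training).T     # cosine similarity matrix
    matches = (sims > threshold).nonzero(as_tuple=False)
    return matches, sims
```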

Similar issues exist for generative models for programming code. If the model generates code that is remarkably similar to its training data, does the copyright belong to the model and its creators, or to the original author of the training code? This issue is already being litigated: https://githubcopilotinvestigation.com/

For a technical overview of how diffusion works, and some tidbits on my own research in this field, see this thread...

https://twitter.com/tomgoldsteincs/status/1562503814422630406?s=20&t=sIG3bLkcBG4BbGXF28nClA
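If you'd rather see the core idea in code, here's a toy DDPM-style sketch (the `eps_model` network is a placeholder, not a specific architecture): add Gaussian noise to an image according to a schedule, and train the network to predict the noise that was added. Sampling then runs that denoiser in reverse, step by step.

```python
# Toy DDPM-style training objective: predict the noise injected at a random timestep.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta_t)

def diffusion_loss(eps_model, x0):
    """One training step's loss for a batch of clean images x0 in [-1, 1]."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a_bar = alphas_bar.to(x0.device)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward (noising) process
    return F.mse_loss(eps_model(x_t, t), noise)            # train to predict the noise
```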

tomgoldsteincs · 54 karma

They used to be for sale - unfortunately the sales platform I relied on left the Shopify network and I had to close my store. This project became unexpectedly popular recently, and I'm planning to get a new store up and running soon. I'll post on my website when I do.