Animats 16 days ago

I just sent in some comments.

It's too late to stop "deep fakes". That technology is already in Photoshop and even built into some cameras. Also, regulate that and Hollywood special effects shops may have to move out of state.

As for LLMs making it easier for people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.

Real threats are mostly about how much decision power companies delegate to AIs. Systems terminating accounts with no appeal are already a serious problem. An EU-type requirement for appeals, a requirement for warning notices, and the right to take such disputes to court would help there. It's not the technology.

  • andy99 16 days ago

    > Systems terminating accounts with no appeal are already a serious problem.

    Right, the issue isn't how "smart" ML models will get, or whatever ignorant framing about intelligence and existential risk gets made up by people who don't understand the technology.

    The real concern is dumb use of algorithmic decision making without recourse, which is just as valid whether it's an if statement or a trillion-parameter LLM.

  • mquander 15 days ago

    According to the bill, a model has "hazardous capabilities" if and only if the existence of the model makes it "significantly" easier to cause the damages the bill covers. If Google is equally good at telling you how to build a bomb and Photoshop is equally good at producing deepfakes, then the bill takes no issue with your LLM.

  • mhuffman 16 days ago

    >As for LLMs making it easier to people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.

    However, those can be and are tracked. The thing making them nervous is the ability to do that on your own with no possible way for someone to track or catch you. Same with deepfakes. They don't care if you are doing it with Photoshop, because that can be reviewed. They care that you can do it and not be caught/stopped/punished for it.

    • AnarchismIsCool 16 days ago

      Ok, this is insanity. The thing keeping people from making destructive devices is the difficulty of synthesizing white fuming nitric acid and similar required precursors. You can Google whatever shit you want if you use someone else's wifi, and you should be able to at least Google whatever you want without the feds showing up.

      The danger of AI has nothing to do with what the average Joe might try to do; it has everything to do with what soulless corporations are doing to you right now and how it enables them to be even worse in the future.

      Right now your roof is being scanned by aircraft with cameras, and AI is being used to determine how old it is and whether there are tree branches nearby. They're also looking at and classifying objects in your back yard to determine safety risks. It's not horribly accurate, but because of the scale it doesn't matter to the companies; you just get fucked. Accidentally bag something you didn't scan at the self checkout? They have AI for that too; there are multiple reports of people being hunted down and charged with theft for simple mistakes.

      Your chances of having your life ruined or degraded because of AI are massively higher than your chances of being hurt by a random individual using it to build destructive devices.

      • robotnikman 16 days ago

        Couldn't have worded it any better myself. AI is already being used for horrible things by companies and state actors and no one bats an eye over it.

        • mhuffman 16 days ago

          Individuals are usually the targets of companies and the government, whereas the government is immune from any blowback for what it does and has, right now, very cozy relationships with large corporations.

      • fragmede 16 days ago

        > The danger of ai has nothing to do with what the average Joe might try to do

        Why not both? With the story of a high school principal being framed by a coworker who deepfaked a racist, antisemitic rant that the principal didn't say, I'd say the danger of AI also has to do with what an average Joe who wants to cause you harm can do. That doesn't diminish the threat from corporations, but a jilted lover can now ruin your life in additional ways.

        https://www.washingtonpost.com/dc-md-va/2024/04/26/baltimore...

        • AnarchismIsCool 16 days ago

          In the case of the example, it didn't work, but every day people are being dropped from their homeowners/car/health insurance.

          Yes, there are dangers there, but they ultimately come down to evidentiary standards. We can't do the thing we always do, where all risk perception is based on extremely rare incidents, so we destroy everyone's privacy while the stuff actually harming people at scale is ignored.

    • jkuli 16 days ago

      Is this true?

      • mhuffman 16 days ago

        This is a snippet from OP site:

        >SB 1047 creates an unaccountable Frontier Model Division that will be staffed by EAs with police powers, and which can throw model developers in jail for the thoughtcrime of doing AI research. It’s being fast-tracked through the state Senate. Since many cloud and AI companies are headquartered in California, this will have worldwide impact.

        Of course that is scare propaganda, but when you put it together with what the Federal government is doing here[0], it makes it pretty clear that the real worry is that people have access to "dangerous" information with no oversight. I can imagine policing agencies at every level getting very nervous about lone-wolf or tiny-militia types getting access to information without any triggers flipping and alerting them, and with no way to get any evidence if they do want to arrest them for something.

        [0]https://www.msn.com/en-us/news/us/us-homeland-security-names...

        • Animats 15 days ago

          It's somewhat exaggerated, but the bill definitely creates a "Frontier Model Division" with a rather vague charter and some enforcement authority.

        • jkuli 16 days ago

          The evidence is the damage caused by their actions. No need to punish people for crimes that haven't occurred.

          • mhuffman 16 days ago

            I am not making a judgement here, just stating the reasons why they would want to do it.

jph00 16 days ago

I've written a submission to the authors of this bill, and made it publicly available here:

https://www.answer.ai/posts/2024-04-29-sb1047.html

The EFF have also prepared a submission:

https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf

A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes. Even then, you can't really know why a model is doing something -- for instance, AI security researchers Arvind Narayanan and Sayash Kapoor point out:

> Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It’s true — in our own small-scale tests, we’ve found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here’s the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that’s being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.

Nearly a year ago I warned that bills of this kind could hurt rather than help safety, and could actually tear down the foundations of the Enlightenment:

https://www.fast.ai/posts/2023-11-07-dislightenment.html

  • zer00eyz 16 days ago

    > A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm.

    Build a model that is trained on the corpus of gun designs.

    Should be an interesting court case and social experiment.

    • Dalewyn 16 days ago

      I was about to say how quick a lot of people are to blame the gun (the tool) and the manufacturer, but when it comes to "AI" (the tool) suddenly they turn 180 degrees and blame the user.

      The reasonable take of course is that the tools are never to blame, they are just tools after all. Blame the bastard using the tools for nefarious ends, whether it's guns or "AI" or whatever else the case may be.

      • zer00eyz 16 days ago

        World Trade Center bombing 1993

        The Oklahoma City bombing 1995

        Most people with a high school level of chemistry and a trip to the library can cause a lot of damage.

        David Hahn, the radioactive Boy Scout, single-handedly created a Superfund site: https://en.wikipedia.org/wiki/David_Hahn

        This will quickly turn into a First Amendment case and die in court, I would think.

        • janalsncm 16 days ago

          Typically it also requires precursor materials.

          I am against laws and systems of government that turn me into a suspect just for learning information. It is not ok that a secret investigation into our private lives is triggered simply by being curious.

pcthrowaway 16 days ago

This bill sounds unbelievably stupid. If passed, it will just result in a migration of AI projects out of California, save a few which are already tied to the EA movement.

I'm not under the impression that the EA movement is better suited to steward AI development than other groups, but even assuming they were, there is no chance for an initiative like this to work unless every country agreed to it and followed it.

  • mquander 16 days ago

    The bill doesn't give special treatment to "EA" models, so what does it matter whether projects are tied to EA or whether EAs are good stewards? Either it's a good law or it isn't.

    At a glance it looks like it's not going to affect AI projects that are basically consumers of existing models, which is most projects.

    • sangnoir 16 days ago

      I suspect the "EA" label is the author having an axe to grind or throwing red meat to people who already hate EA. Sam Altman is on record lobbying for "AI safety laws" which would have a similar effect of raising the bar of entry incredibly high using legal peril, and he was reportedly ousted from the board by EA-aligned folk.

      • ShamelessC 15 days ago

        With all due respect, the people on HN are _weirdly_ in favor of a group whose unstated “zeroth” tenet is basically “be born into wealth, get extremely lucky, disregard common criticism of capitalism, reframe libertarianism if you need. Then and only then can you begin your righteous mission of donating your money “effectively”.

        Could you at least consider that the group’s entire premise seems like nothing more than the post hoc rationalization of a bunch of wealthy educated elites with low social and emotional intelligence? The level of tone-deafness I perceive, as someone who doesn’t have and can’t have that much wealth to even begin my journey in their little club, is enormous.

        There’s very little that is subtle about it and it’s frankly offensive and _clearly_ used as a justification for insecure Bay Area “liberals” who find themselves with lots of money and a political identity that makes them insecure about that fact.

        The answer? You’re actually saving mankind with your money! It’s just simple Bayesian logic! The same thing that got you here! (Spoiler alert: it was more to do with luck than skill).

        If you’re on board, I guess it’s not as offensive? But for me and others, it’s like elites trying to brag about how great they are while the rest of us fight for scraps.

        So when you see someone with an axe to grind, maybe consider that EA’s messaging is not as universally appealing as you think, and may even be outright tone deaf enough to cause one to reasonably find it disgusting.

        > and he was reportedly ousted from the board by EA-aligned folk.

        It seems naive to me to assume that members of EA in positions of power don’t secretly have their own motivations. Furthermore it’s not very “Bayesian” to assume that a implies b here with so many hidden variables at play.

        • sangnoir 15 days ago

          > With all due respect, the people on HN are _weirdly_ in favor of a group whose unstated “zeroth” tenet is basically “be born into wealth

          I'm not sure if this was directed at my comment, but I'll clarify that my comment wasn't in defense of EA. My position is: setting up a regulatory moat is not a strategy exclusive to EA. Many incumbents - including sama - are overtly (and likely covertly) attempting regulatory capture, regardless of their 'politics'.

    • fragsworth 16 days ago

      > At a glance it looks like it's not going to affect AI projects that are basically consumers of existing models, which is most projects.

      If it affects the base projects (especially the open source ones like Llama) then it affects the consumers. And it certainly looks like it's planning to affect the base projects, in a lot of negative ways.

      If this bill passed in any way remotely similar to what it is now, Meta would have to entirely stop releasing open source Llama updates.

      Which is perhaps the intent of the legislation.

    • pcthrowaway 16 days ago

      I mean I was just going off the second sentence of the article, my bad:

      > SB 1047 creates an unaccountable Frontier Model Division that will be staffed by EAs with police powers, and which can throw model developers in jail for the thoughtcrime of doing AI research

      If the bill says nothing about who will be staffing this agency, and there are indeed no ties to EA (which seems unlikely to me if EA is behind the bill), then the author of the article is doing us a disservice by misrepresenting it.

  • fulafel 13 days ago

    Regarding "otherwise they'll just move" - yes it's a collective action problem. But usually the solution to these is not just to give up, if it's a big problem. The international policy approach is usually to DTRT at home and work on treaties internationally (or other policy tools, stick and carrot etc).

    Also, from a moral philosophy point of view, "this is wrong but if we don't do it, someone else will collect the profits and the bad thing will happen anyway - hence we'll do this wrong thing" is not sound. Applied generally it would lead to a lot of bad stuff (moral race to the bottom).

    • pcthrowaway 13 days ago

      I generally agree, but really think AI is different with regards to the stakes being (at least viewed by many as) higher than any other field of research.

      Whether or not you believe this is the case with AI is not what I'm getting at, but a lot of people believe AI has the potential to cause more damage than nuclear weaponry, as well as the potential to suspend aging, end all natural causes of death, regrow limbs, and fix any bodily ailment.

      There will be countries that embrace AI out of hope for the latter, and capitalists who do their capitalist thing and make as much money as they can off the perceived latent power.

      There is no prospect of achieving a truly international consensus (meaning applied the same way by every country, we don't even have that with copyright, though that's the closest example I can think of) on AI policy. Given that, do you really think the U.S. is going to sit by and watch as the majority of AI research is conducted in other countries?

  • echelon 16 days ago

    Honestly it would be good for AI if it left California.

    California has too much regulatory burden and taxation.

    • kbenson 16 days ago

      My initial interpretation of this is along the lines of "this very contentious thing, which many people are afraid will cause lots of problems if not handled carefully with checks and balances, should move out of the place where it's currently being done, which cares a lot about checks and balances and puts a lot of them into place, because it has too many."

      Am I jumping to conclusions and is there a different interpretation you think I should be coming away with?

      • AnthonyMouse 16 days ago

        That interpretation isn't necessarily unmeritorious. Suppose you have a place with Level 7 checks and balances and people are content to live under them. If you dial it up to 9, then they move to a place at Level 2, which otherwise wouldn't have been worth it because of other trade offs. So the new rules don't take you from Level 7 to Level 9, they take you from Level 7 to Level 2.

        But there is also another interpretation, which is that the new thing is going to happen in whatever place has the least stringent rules anyway, so more stringent rules don't improve safety, they just deprive your jurisdiction of any potential rewards from keeping the activity local, and provide people in other jurisdictions the benefit of the influx of people you're inducing to leave.

        • kbenson 16 days ago

          So, obviously there is also interpreting that statement in a vacuum, which I sort of did, versus interpreting it as a criticism of this current proposal. I think it's slightly ambiguous what was meant, even given its location, because it seemed like a general sentiment and not specific to this proposal.

          I'm not sure in the general context your first case is super applicable, given that there's a lot of exposure and worry about this topic. Laws and regulation can apply to use as well as development, and large markets can have outsized effects when they require things (such as how CA emissions laws and GDPR have), and I'm not sure worrying about chasing away business is a worthwhile concern when many people are very afraid of the societal consequences of the thing in question.

          For what it's worth I don't really follow the same stance when it comes to military technology, because that's meant to be used in a situation when local (which in that case can be national) laws have little or no sway, but I'm open to arguments about how viewing them differently isn't useful.

    • coffeebeqn 16 days ago

      Why did it happen in the Bay Area in the first place then? People love to hate on CA but it sure seems to keep producing interesting products that the rest of the world cannot.

      • janalsncm 16 days ago

        People that complain about “regulatory burden” and “taxation” don’t have any specific complaints, just vague platitudes that may or may not be true depending on what you’re talking about.

        It doesn’t make any sense to talk about the number of regulations. What matters is what those regulations are. Likewise, it doesn’t make sense to talk about the amount of taxation without talking about who is being taxed.

interroboink 16 days ago

I feel like the legal definition of "AI Model" is pretty slippery.

From this document, they define:

    “Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

That's pretty dang broad. Doesn't it cover basically all software? I'm not a lawyer, and I realize it's ultimately up to judges to interpret, but it seems almost limitless. Seems like it could cover a kitchen hand mixer too, as far as I can tell.

  • maxerickson 15 days ago

    If mixing cookie dough under pretty direct user supervision is inference, what possible definition could be clear enough for you?

    • interroboink 15 days ago

      It's a good mental exercise to go through. Does the behavior of a small handful of AND/OR-style logic gates constitute "inference" (like the simple circuitry in a hand tool)? What about hundreds? Millions? At some point in this exercise we will have built a ML model, so where do we draw the line?

      My point was that it's a spectrum, and the law doesn't seem to give guidance on where to draw the line on that spectrum. The hand mixer was just a clearly absurd example on the far opposite end of it, to show its breadth.

      So back to your question; some improvements in my mind might be:

      (1) Don't phrase this as an AI topic at all. Make laws about the safety of automated systems of all kinds which have health & safety implications — we already have lots of laws for cars (whether AI driven or not), medical equipment, and yes, even kitchen appliances (: Then, the definition of "AI Model" is irrelevant.

      (2) If we do want something specific to AI, then the definition should be more specific. The definition could involve it being a stochastic process (unlike much other software), having inner workings that are poorly understood even by experts (unlike much other software), and whose logic is developed statistically from training data rather than being hand-designed (unlike much other software (or kitchen appliances!)).

      • maxerickson 14 days ago

        If there is a single mapping from each set of inputs to one set of outputs (and no variation), then no, it isn't inference.

        If the user stops the process when they want (instead of the process self regulating), it's hard to argue the process is doing anything in isolation.

        • interroboink 14 days ago

          So by that logic, if we run an ML model with a consistent pseudorandom seed each time (so the output is deterministic), then it is not doing inference?
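
          To make the point concrete, here's a minimal sketch (plain Python, purely illustrative; toy probabilities rather than a real model): fix the seed and the "stochastic" sampling step becomes fully reproducible, yet most people would still call the process inference.

              import random

              random.seed(0)  # fixed seed: the "stochastic" sampling is now fully reproducible
              sampled = random.choices(["yes", "no"], weights=[0.7, 0.3], k=5)
              print(sampled)  # identical output on every run, yet the sampling logic is unchanged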

_heimdall 16 days ago

Anyone have a link to a less biased explanation of the bill? I can't take this one too seriously when it baselessly claims people will be charged with thought crimes.

  • s1k3s 16 days ago

    Why not read the bill itself? https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...

    It's not that big

    • thorum 16 days ago

      The bill only applies to new models which meet these criteria:

      (1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.

      (2) The artificial intelligence model was trained using a quantity of computing power sufficiently large that it could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024 as assessed using benchmarks commonly used to quantify the general performance of state-of-the-art foundation models.

      …and have the following:

      “Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model:

      (A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

      (B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.

      (C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.

      (D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.

      …In which case the organization creating the model must apply for one of these:

      “Limited duty exemption” means an exemption, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.
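
      For a sense of scale, the 10^26 threshold in (1) is firmly frontier-lab territory. A rough back-of-envelope sketch (my numbers, not the bill's; assuming roughly 1e15 FLOPS of sustained throughput per modern accelerator):

          # Rough scale of the 10^26 FLOP training threshold.
          THRESHOLD_FLOPS = 1e26
          PER_GPU_FLOPS = 1e15                          # assumption, not from the bill
          gpu_seconds = THRESHOLD_FLOPS / PER_GPU_FLOPS
          gpu_years = gpu_seconds / (3600 * 24 * 365)
          days_on_10k = gpu_seconds / 10_000 / (3600 * 24)
          print(f"~{gpu_years:,.0f} GPU-years, or ~{days_on_10k:.0f} days on a 10,000-GPU cluster")
          # -> roughly 3,171 GPU-years, or ~116 days on 10,000 GPUs at full utilization

      So only very large training runs are covered directly; the open question is how far criterion (2)'s "similar or greater performance" clause stretches that over time.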

      • jph00 16 days ago

        Pretty much all models, including today's models, already fall foul of the "Hazardous capability" clause. These models can be used to craft persuasive emails or blog posts, analyse code for security problems, and so forth. Whether such a thing is done as part of a process that leads to lots of damage depends on the context, not on the model.

        So in practice, only the FLOPs criterion matters. Which means only giant companies with well-funded legal departments, or large states, can build these models, increasing centralization and control, and making full model access a scarce resource worth fighting over.

        • andy99 16 days ago

          Really I feel the opposite way: none of today's models, or anything foreseeable, meets the hazardous capability criteria. Some may be able to provide automation, but I don't see any concrete examples where there's an actual step change in what's possible due to LLMs. The problem is it's all in the interpretation. I imagine some people will think a 7B model is "dangerous" because it can give a bullet-point list of how to make a bomb (step 1: research explosives) or write a phishing email that sounds like a person wrote it. In reality the bar should be a lot higher: uniquely making something possible that wouldn't otherwise be, with concrete examples of it working or being reasonably likely to work, not just the spectre of targeted emails.

          I've actually been thinking there should be a bounty for identifying a real hazardous use of AI. The problem would be defining hazardous (which would hopefully itself spur conversation). On one end I imagine trivial "hazards" like what we test models with today (like asking how to build a bomb), and on the other it's easy to see there could be a shifting-goalposts thing where we keep finding reasons something that technically meets the hazard criteria isn't really hazardous.

      • yellow_postit 16 days ago

        Very similar to what the White House put out [1] in terms of applicability being based on dual use & size. It is hard not to see this as a push for regulatory capture, specifically trying to chill open source development in favor of well-funded, closed-source industry groups which can adhere to these regulations.

        A harms-based approach, regardless of the model used, seems more able to be put into practice.

        [1] https://www.whitehouse.gov/briefing-room/presidential-action...

      • janalsncm 16 days ago

        I don’t understand how a law can expect someone to foresee and quantify potential future damage. I understand the impetus to hold companies responsible, but that is simply impossible to know.

      • purlane 16 days ago

        This sounds entirely reasonable!

        • 65a 16 days ago

          640kb should be enough for anyone!

    • _heimdall 16 days ago

      Will do, thanks! I must have just missed it if the original page linked to it.

    • polski-g 16 days ago

      Looks like it's trying to literally regulate speech. This would be struck down per Bernstein v DOJ.

      There is freedom of speech regardless if it's written in English or C.

      • ickelbawd 16 days ago

        I dunno. This one seems trickier to me, since it's not really the code that's the key part of the AI—it's the trained numerical weights. Can you read and write in this language of floating point numbers? I doubt that. So perhaps yes, you can write the code that could train an LLM and you could freely distribute that, but does freedom of speech allow you to train the model weights and distribute those?

        • hellojesus 15 days ago

          Yes. Hence freedom of speech. You're allowed to write them in a book and sell the book if you must.

      • _heimdall 16 days ago

        Writing code absolutely does not fall under free speech. Neither does any product development. Ford isn't allowed to ignore seat belt requirements and claim the government is infringing on the designers' freedom of speech/expression.

        • carbocation 16 days ago
          • _heimdall 16 days ago

            That case was extremely specific and doesn't mean that all code written falls under free speech.

            It was also tried in a very different time. Given that we can't even allow free speech on digital platforms today, I'm not sure that many courts would allow free speech claims like this to fall under the First Amendment.

        • jkuli 16 days ago

          Haha, confidently incorrect. You made that up didn't you. Absolutely times infinity.

  • gedy 16 days ago

    Briefly:

    - Developers must assess whether their AI models have hazardous capabilities before training them. They must also be capable of promptly shutting down the model if safety concerns arise.

    - Developers must annually certify compliance with safety requirements. They must report any AI safety incidents to a newly created Frontier Model Division within the Department of Technology.

    - Cluster operation regulation: operators of computing clusters must have policies to assess whether customers intend to use the cluster for deploying AI models. Violations may lead to civil penalties.

    - A new division within the Department of Technology will review developer certifications, release summarized findings, and may assess related fees.

    - The Department of Technology will establish a public cloud computing cluster named CalCompute, focusing on safe and secure deployment of large-scale AI models and promoting equitable innovation.

    • _heimdall 16 days ago

      I don't work on ML directly so I'm definitely coming at it from a more general CS angle, but I wouldn't feel comfortable anywhere near the first two bullets.

      My outsider's understanding is that we really don't know specifically how the models learn what they learn or why they give specific answers. Is it even possible to know whether a model could present hazardous capabilities prior to training it? Or after it, for that matter?

      • gedy 16 days ago

        Yeah, my problem with a lot of these people pushing for “regulating” AI is that they seem to always assume the organizations they are dealing with are big, well-funded corporations. But this will affect individual or open source developers in California just as well. It's a really dumb proposal; hopefully it doesn't go through.

hackermatic 16 days ago

I encourage people to look for a variety of opinions on this bill -- and its various parts -- so you can better figure out which parts you actually want to keep, change, or remove, and give your legislators that specific feedback.

Alliance for the Future is a lobby group of effective accelerationists who endorse some of Marc Andreessen and Peter Thiel's views in their manifesto, and based on that plus this article, they seem to oppose the bill entirely.

A place to start for a breakdown of what's in the bill is the Context Fund analysis that AFTF links to. That analysis cites similar critiques from EFF, the Software & Information Industry Association, and others. All of these are from the perspective of voting against or substantially changing the bill.

I haven't found "pro bill" opinions as easily, but I haven't been plugged into the conversations around this, so I'm missing anything that doesn't appear on the first few pages of Google or DDG.

elicksaur 16 days ago

I’ll happily support regulation of the space when the bill writers of these proposals stop using definitions of “artificial intelligence” that could reasonably be construed by a lawyer to cover literally any computer program.

> (b) “Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

Imnimo 16 days ago

>(2) “Hazardous capability” includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.

So if I hand-write instructions to make a chemical weapon, and aggressively "fine-tune" Llama 7B to output those instructions verbatim regardless of input, Meta is liable for releasing a model with hazardous capabilities?

  • thorum 16 days ago

    The text says “in a way that would be significantly more difficult to cause without access to a covered model” and in another place mentions “damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human” so that probably doesn’t count. Though it might be open to future misinterpretation.

    • Imnimo 16 days ago

      I don't agree with that reading. As long as my custom chemical weapon instructions are not publicly available otherwise, then it is surely more difficult to build the weapon without access to the instructions.

      The line about autonomous actions is only item C in the list of possible harms. It is separate from item A which covers chemical weapons and other similar acts.

      • simonh 16 days ago

        If someone is so keen on doing something illegal that they go to all that trouble specifically to get in trouble with the law, maybe that's up to them.

        • Imnimo 16 days ago

          It's not the person who does the fine-tuning I'm worried about, it's the person who releases the base model who the law also makes liable.

          The point is that, because fine-tuning can trivially induce behavior that satisfies the standard for "hazardous capability" in any model, the law effectively makes it illegal to release any covered model.

        • jph00 16 days ago

          You're missing the point. Liability here would also fall on the open source developer who created a general purpose model, which someone else then went on to fine-tune and prompt to do something harmful.

protocolture 16 days ago

Question: what happens if I write a piece of software that is harmful but doesn't have the AI label?

It seems dumb to have a separate classification for harms caused by trained AI models. The training aspect doesn't seem to limit liability at all. A judge might rule differently, but that's why the justice system is built the way it is: to make intelligent decisions based on the specific facts of a case.

I am betting that software that causes some significant harm is already outlawed. So this whole thing is just a waste of time.

  • carbocation 16 days ago

    The substance of my comment to Sen Wiener hinges on a similar point to what you raise here.

    It's incredibly difficult to imagine doing a good job of regulating model training, especially in a few years when the available FLOPs are high enough that this limit is being hit often.

    It's much more straightforward to regulate actions: constructing WMD is illegal, synthesizing drugs is illegal, etc.

    If the state wants to tighten up its laws about various activities, go for it. That's the right place to act. Injecting itself into the model training process seems very unlikely to yield any substantive benefits and very likely to hinder progress.

cscurmudgeon 16 days ago

> A developer of a covered model that provides commercial access to that covered model shall provide a transparent, uniform, publicly available price schedule for the purchase of access to that covered model

Interesting. We don't have a transparent, uniform, publicly available price schedule for healthcare and other basic needs (electricity, for example; see PG&E).

Something is fishy here.

andy99 16 days ago

I agree with this (the call to action, not the act) and will try to respond and share it, but it's a lobby group, right ("Alliance for the Future")? I'd like to know who is funding it and a bit more about it.

  • stefan_ 16 days ago

    [flagged]

    • jph00 16 days ago

      What's with the ad-hominem? I can't see where you're getting that from at all. The folks involved in this lobby group are listed here:

      https://www.affuture.org/about/

      • andy99 16 days ago

        I'd be much more interested in where the money is from than who the people are.

        Not trying to attack anyone, and at some level I don't care because I agree with the cause, but I can still imagine benefactors that would make the whole thing look bad.

phkahler 16 days ago

It would be really helpful if folks like Sam Altman and Elon would STFU about dangers and claims of AGI or better in the next months.

If you're actually worried about AI we need to ban any generative AI that can replicate a specific person's voice or appearance. Beyond that I don't see any immediate danger.

  • coffeebeqn 16 days ago

    A cynical take is that they are just generating hype and investment in their companies. "We’re getting close to AGI, so better throw a few billion our way before you miss out" is a nice pitch.

    How long has Musk been promising full self-driving in the next X months (while making people pay $10k for it)? Anyone taking his word as anything close to reality is a fool.

    • phkahler 15 days ago

      >> A cynical take is that they are just generating hype and investment in their companies. We’re getting close to AGI so better throw a few billion our way before you miss out - is a nice pitch.

      Well Sam literally asked for 7 trillion dollars to build out the semiconductor industry enough to support AI. That implies AI is worth that kind of investment. He's a smart guy, but I saw the video and IMHO he really sounded foolish in that moment.

      Let the hype continue...

throwing_away 16 days ago

Slow down there, California.

Florida is growing too fast as it is.

jkuli 16 days ago

I'm unable to register. This is GME stonks all over again. It takes less than 1 second to process an account. There are 18,000 seconds in five hours. There must be a lot of comments that they don't agree with. Maybe they shut it down to protect humanity from extinction?

nonplus 16 days ago

I guess I think we should hold models used for non-academic reasons to a higher standard, and there should be oversight.

I don't know if all the language in this bill does what we need, but I'm against letting large corporations like Meta or X live-test whatever they want on their end users.

Calling out that derivative models are exempt sounds good; only new training sets have to be subjected to this. I think there should be an academic limited duty exemption; models that can't be commercialized likely don't need the rigor of this law.

I guess I don't agree with affuture.org and think we need legislation like this in place.

  • jph00 16 days ago

    It sounds like you do agree with affuture.org, though. The proposed draft does not hold models used for non-academic reasons to a higher standard, and "models that can't be commercialized" are covered by it. It will be far harder for academics to work on large models under this draft.

    • nonplus 15 days ago

      > It sounds like you do agree with affuture.org though.

      From the post:

      > We need your help to stop this now.

      I do not want to stop this bill; I want it revised. If that is not understood, then I suspect my primary goal for safer, less biased models is at odds with your primary goal of unregulated innovation.

      At least if we continue to discuss this as a binary where agreeing with this affuture.org post means killing this legislation (as the post asks for) and not replacing it.

carbocation 16 days ago

I think this advice is incomplete. For those of us who live in California, shouldn't we be contacting our representatives?

synapsomorphy 16 days ago

I don't think this bill would be that effective, but I do feel that if we as a species don't do something drastic soon, we won't be around for a whole lot longer.

And I'm not sure if it's even possible to do something drastic enough at this point - regulating datacenters would just make companies move to other countries, just like this would probably just make companies move out of CA.

  • squigz 16 days ago

    > we won't be around for a whole lot longer.

    Why do you think that?

    > regulating datacenters would just make companies move to other countries

    To say nothing of the potential issues for a free society that going down this route will yield - and has arguably already yielded.

    • synapsomorphy 15 days ago

      To me, AI is clearly on a trajectory to be more intelligent than humans in a few decades or less, even if it isn't that smart now - the funding and talent going into it these days is just crazy and there is no fundamental difference between the logic of our brains and transistors. I personally don't believe humans can coexist with a superintelligence.

      I agree with your point and most other points about the negatives of regulating compute but like, if the other side of the scale is species-level genocide, does any of it matter?

      • squigz 15 days ago

        > if the other side of the scale is species-level genocide, does any of it matter?

        Possibly not, but I have to disagree with your assessment of our future. It doesn't seem as clear to me that we couldn't co-exist with an AI superintelligence (putting aside any arguments about what that even means for now)

  • xbar 16 days ago

    This bad law is not a substitute for your possibly-good law.

    Bad lawmakers commit this fallacy all the time.

    Write your good law idea down and send it to a lawmaker who will act.

johnea 16 days ago

[flagged]

  • choilive 16 days ago

    I can't tell if this is satire. If so, good one. If not, you should think about at least what the second-order effects would be here.

s1k3s 16 days ago

The article suggests that this act will effectively destroy any open source AI initiative in California. After reading the act, this seems to be the correct assumption. But is Open Source AI even a thing at this point?

By the way, this is how the EU does things and that's why we're always behind on anything tech :)

  • AnthonyMouse 16 days ago

    > is Open Source AI even a thing at this point?

    What do you mean? You can download llama.cpp or Stable Diffusion and run it on your ordinary PC right now. People make variants using LoRA adapters and things with relatively modest resources. Even creating small specialized models from scratch is not impossibly expensive and they often outperform larger generalized models in the domain they're specialized for.

    Creating a large model like llama or grok takes a lot of resources, but then it's entities with a lot of resources that create them. Both of those models have open weights.

    • s1k3s 16 days ago

      > (j) (1) “Developer” means a person that creates, owns, or otherwise has responsibility for an artificial intelligence model.

      For as long as you don't distribute the model and you only use it for yourself, you don't fall under this definition (if I understand correctly).

      • AnthonyMouse 16 days ago

        Open source implies that you are distributing it.

        • s1k3s 16 days ago

          Your comment implies you're not distributing it, you're using something that was distributed to you [in this case by META].

          If you would distribute a mod of the model you would fall under the restrictions of this bill. Which is why I asked the original question: are people even doing this?

          • AnthonyMouse 16 days ago

            But then Meta is distributing it. And if you modify it in a way that others may find useful, you might also like to distribute your modifications.

  • OKRainbowKid 16 days ago

    This is also why we have actually meaningful consumer protections in place.

    • s1k3s 16 days ago

      Can you give me an example of effective consumer protection?

      • xbar 16 days ago

        The food safety laws of 1906 and the safe cosmetics laws of 1938.

        Seriously, the death and destruction caused by lead, morphine and mercury in everyday things was not a joke.

      • noodlesUK 16 days ago

        Yes, consumer rights in Europe are great for buying various goods and services!

        For example, the EU regs for flight delays are a great example of consumer protection that is actually beneficial. You get paid cash compensation (which often exceeds the face value of the ticket) if you’re delayed more than a certain amount.

        • karaterobot 16 days ago

          FYI, you are entitled to a refund in the U.S. if your flight is delayed significantly. As of last week, it's 3 hours for domestic flights, or 6 hours for international. The refund is automatic. It's been true for a long time that people could get refunds for significant delays, but how long a delay had to be to count as "significant" was never defined.