Researchers find that a modest amount of fine-tuning can bypass safety efforts aiming to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content (Thomas Claburn/The Register)
Thomas Claburn / The Register:
Researchers find that a modest amount of fine-tuning can bypass safety efforts aiming to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content — OpenAI GPT-3.5 Turbo chatbot defenses dissolve with ‘20 cents’ of API tickling — The “guardrails” created to prevent large language models …
http://www.techmeme.com/231015/p4?utm_source=dlvr.it&utm_medium=blogger#a231015p4