Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap, and sure, I did instantly think, “if this silly video is that good, imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing, and today it updated its policies to begin addressing the issue.
The Wall Street Journal noted the new policy changes, which were first published on OpenAI’s blog. Users and makers of ChatGPT, DALL-E, and other OpenAI products are now forbidden from using OpenAI’s tools to impersonate candidates or local governments, and users can’t use OpenAI’s tools for campaigns or lobbying either. Users are also not permitted to use OpenAI tools to discourage voting or misrepresent the voting process.
OpenAI also plans to encode DALL-E images with digital credentials from the Coalition for Content Provenance and Authenticity (C2PA). The digital credential system would encode images with their provenance, effectively making it much easier to identify an artificially generated image without having to look for weird hands or exceptionally swag jackets.
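To make the idea concrete: the C2PA spec embeds provenance manifests in JPEG files as JUMBF boxes carried in APP11 segments. The sketch below (my own illustration, not OpenAI’s or C2PA’s tooling) just walks a JPEG’s segment headers and reports whether such a segment appears; real verification would also check the manifest’s cryptographic signature.

```python
# Rough heuristic sketch: detect an embedded C2PA-style provenance manifest
# in a JPEG by looking for APP11 (0xFFEB) segments, where C2PA stores its
# JUMBF boxes. Presence of a segment is NOT proof of a valid, signed manifest.
import struct
import sys

def has_provenance_segment(path: str) -> bool:
    """Return True if the JPEG contains an APP11 segment that looks like
    a JUMBF/C2PA box."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:               # start-of-scan: metadata is over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if length < 2:                   # malformed segment length
            break
        if marker == 0xEB:               # APP11: candidate JUMBF/C2PA box
            payload = data[i + 4:i + 2 + length]
            if b"c2pa" in payload or b"jumb" in payload:
                return True
        i += 2 + length                  # skip to the next segment
    return False

if __name__ == "__main__":
    path = sys.argv[1]
    verdict = "may carry" if has_provenance_segment(path) else "does not carry"
    print(f"{path} {verdict} an embedded provenance manifest")
```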
OpenAI’s instruments will even start directing voting questions in the USA to CanIVote.org, which tends to be probably the greatest authorities on the web for the place and easy methods to vote within the U.S.
But all of these tools are still only in the process of being rolled out, and they depend heavily on users reporting bad actors. Given that AI is itself a rapidly changing tool that regularly surprises us with wonderful poetry and outright lies, it’s not clear how well any of this will work to combat misinformation during election season. For now, your best bet will continue to be embracing media literacy. That means questioning every piece of news or imagery that seems too good to be true, and at least doing a quick Google search if your ChatGPT query turns up something absolutely wild.