AI software will need to master tasks like these if it is to approach the multifaceted, adaptable, and creative intelligence of humans, a goal known as artificial general intelligence that may never be reached. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.
There’s a particular type of AI making headlines—in some cases, actually writing them too. Generative AI is a catch-all term for AI that can cobble together bits and pieces from the digital world to make something new—well, new-ish—such as art, illustrations, images, complete and functional code, and tranches of text that pass not only the Turing test, but MBA exams.
Tools such as OpenAI’s ChatGPT text generator and Stability AI’s Stable Diffusion text-to-image model manage this by ingesting staggering amounts of data, analyzing the patterns with neural networks, and regurgitating them in sensible ways. The natural language system behind ChatGPT has churned through huge swaths of the internet, as well as an untold number of books, letting it answer questions, write content from prompts, and, in the case of CNET, write explanatory articles for websites to match search terms. (To be clear, this article was not written by ChatGPT, though including text generated by the natural language system is quickly becoming an AI-writing cliché.)
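To make that pattern-matching concrete, here is a minimal sketch of prompt-driven text generation. It assumes the open source Hugging Face transformers library and the small, publicly released GPT-2 model as a stand-in for far larger commercial systems; the prompt and sampling settings are illustrative, not anything OpenAI actually uses.

```python
# Minimal sketch of generative text completion, assuming the Hugging Face
# "transformers" library is installed (pip install transformers torch).
# GPT-2 here stands in for much larger systems such as ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is making headlines because"
# The model repeatedly predicts a plausible next token given everything
# written so far, which is how it turns learned patterns into new-ish text.
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```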
While investors are drooling, writers, visual artists, and other creators are naturally worried: Chatbots are (or at least appear to be) cheap, and humans require a livable income. Why pay an illustrator for an image when you can prompt DALL-E to make something for free?
Content makers aren’t the only ones concerned. Google is quietly ramping up its AI efforts in response to OpenAI’s accomplishments, and the search giant should be worried about what happens to people’s search habits when chatbots can answer questions for us. So long Googling, hello ChatGPT-ing?
Challenges loom on the horizon, however. AI models need more and more data to improve, but OpenAI has already mined the easy sources; finding fresh troves of written text won’t be easy or cheap. Legal challenges are mounting too: OpenAI is training its systems on text and images that may be under copyright, perhaps even created by the very people whose jobs are at risk from this technology. And as more online content is generated by AI, a feedback loop emerges in which the data used to train future models is produced not by humans but by machines.
Data aside, there’s a fundamental problem with such language models: They spit out text that reads well enough but is not necessarily accurate. As smart as these models are, they don’t know what they’re saying or have any concept of truth—that’s easily forgotten amid the mad rush to make use of such tools for new businesses or to create content. Words aren’t just supposed to sound good, they’re meant to convey meaning too.
There are as many critics of AI as there are cheerleaders—which is good news, given the hype surrounding this set of technologies. Criticism of AI touches on issues as disparate as sustainability, ethics, bias, disinformation, and even copyright, with some arguing the technology is not as capable as most believe and others predicting it’ll be the end of humanity as we know it. It’s a lot to consider.
To start, deep learning inherently requires huge swathes of data, and though innovations in chips mean we can process it faster and more efficiently than ever, there’s no question that AI research churns through energy. One startup estimated that in teaching a system to solve a Rubik’s Cube with a robotic hand, OpenAI consumed 2.8 gigawatt-hours of electricity, roughly as much as three nuclear plants could output in an hour. Other estimates suggest training a single AI model emits as much carbon dioxide as five American cars being manufactured and driven over their average lifespans.
There are ways to reduce the impact: Researchers are developing more efficient training techniques, models can be chopped up so that only the necessary sections are run, and data centers and labs are shifting to cleaner energy. AI also has a role to play in improving efficiency in other industries and otherwise helping address the climate crisis. But boosting the accuracy of AI generally means having more complicated models sift through more data. OpenAI’s GPT-2 model reportedly had 1.5 billion weights for assessing data, while GPT-3 had 175 billion. That suggests AI’s sustainability could get worse before it improves.
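For a sense of what those weight counts mean, here is a rough back-of-the-envelope sketch. It assumes the common approximation that a decoder-only transformer has roughly 12 × layers × width² weights (embeddings ignored), plugged into the published layer and width figures for GPT-2's largest (XL) configuration and GPT-3; the helper function is purely illustrative.

```python
# Rough approximation of transformer weight counts, ignoring embeddings.
def approx_params(n_layers: int, d_model: int) -> int:
    """Approximate parameter count: ~12 * layers * width^2."""
    return 12 * n_layers * d_model ** 2

# Published configurations: GPT-2 XL (48 layers, width 1600),
# GPT-3 (96 layers, width 12288).
print(f"GPT-2 XL: ~{approx_params(48, 1600) / 1e9:.1f}B weights")    # ~1.5B
print(f"GPT-3:    ~{approx_params(96, 12288) / 1e9:.0f}B weights")   # ~174B
```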
Vacuuming up all the data needed to build these models creates additional challenges, beyond the shrinking supply of fresh data mentioned above. Bias remains a core problem: Data sets reflect the world around us, which means models absorb our racism, sexism, and other cultural assumptions. This causes a host of serious problems: AI trained to spot skin cancer performs better on white skin; software designed to predict recidivism disproportionately rates Black people as more likely to reoffend; and flawed AI facial-recognition software has already misidentified Black men, leading to their arrests. And sometimes the AI simply doesn’t work: One violent-crime prediction tool for police was wildly inaccurate because of an apparent coding error.
Again, mitigations are possible. More inclusive data sets could help tackle bias at the source, while forcing tech companies to explain their algorithmic decision-making could add a layer of accountability. Diversifying the industry beyond white men wouldn’t hurt, either. But the hardest problems may require regulating, and perhaps banning, the use of AI decision-making in situations that carry the greatest risk of serious harm to people.
Those are a few examples of unwanted outcomes. But people are also already using AI for nefarious ends, such as creating deepfakes and spreading disinformation. While AI-edited or AI-generated video and imagery has intriguing uses, such as filling in for voice actors who leave a show or pass away, generative AI has also been used to make deepfake porn that grafts famous faces onto adult performers, and to defame everyday individuals. And AI has been used to flood the web with disinformation, though fact-checkers have turned to the same technology to fight back.
As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Meta have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or Black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI.
But the hype around generative models suggests we still haven’t learned our lesson when it comes to AI. We need to calm down; understand how it works and when it doesn’t; and then roll out this tool in a careful, considered manner, mitigating concerns as they’re raised. AI has real potential to better—and even extend—our lives, but to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.
Tom Simonite is a former senior editor who edited WIRED’s business coverage. He previously covered artificial intelligence and once trained an artificial neural network to generate seascapes. Before WIRED, Simonite was San Francisco bureau chief at MIT Technology Review and wrote and edited technology coverage at New Scientist magazine in London.
Generative AI has had a profound impact on how content in many forms, such as text, music, and art, is created and used. Using the technology also raises copyright issues and, with them, legal uncertainty. AI-driven tools are developing faster than the law can keep up, and many questions remain unsettled. For example, it could be argued that using content to build datasets in an educational setting often qualifies as "fair use" under US copyright law or fair dealing in Hong Kong. Publishers and copyright owners, though, retain the right to challenge such use and to seek compensation for intellectual property violations through the courts. If you use AI-generated content without checking whether it draws on copyrighted works, there is a risk of copyright infringement. Furthermore, AI tools can infringe copyright in existing works by generating outputs that closely resemble them.
Given the uncertainty surrounding copyright and AI, as well as the need for clarification on other topics related to the use of AI tools, it is crucial to be aware of the potential risks and take measures to protect ourselves and our works. Here are some recommended guidelines and best practices for utilizing AI in academic and scholarly fields.
Led by The Chinese University of Hong Kong (CUHK) and five other partnering higher education institutions in Hong Kong, this project proposes a collaborative community of seasoned educators and technical experts to provide practitioners with the necessary support and resources to leverage AI tools for innovative pedagogies.
“The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. It also raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.” (UNESCO, 2023)
AI's impact on the creative landscape and copyright laws is a global concern. Cases in South Korea, the United States, and China mentioned in the article highlight the evolving legal landscape and its implications for copyright protection. (Copyright Agent, January 2024)
GenAI has made a significant impact on higher education. Ithaka S+R has been cataloging GenAI applications specifically useful for teaching, learning and research in the higher education context. The content will be continuously updated to reflect the latest developments. (ITHAKA S+R)
Background: The recent development of Large Language Models (LLMs) and Generative AI (GenAI) presents new challenges and opportunities in scholarly communication, and has prompted diverse policies from journals, publishers, and funders on the use of AI tools. Research studies, including surveys, suggest that researchers are already using AI at a significant scale to create or edit manuscripts and peer-review reports, yet the accuracy, effectiveness, and reproducibility of these tools remain uncertain. This toolkit aims to promote responsible and transparent use of AI by editors, authors, and reviewers, with links to examples of current policies and practices. As AI tools are evolving quickly, we will monitor and update these recommendations as new information becomes available. Please contact us and share any opinions, policies, and examples that could help us improve this guide.
We strongly recommend that editors, journals, and publishers develop policies on the use of AI in their publishing practices and publish those policies on the journal’s website. For instance, policies for authors should be listed in the journal’s ‘Instructions to Authors’ or ‘Submission Guidelines’, while policies for reviewers should appear in the journal’s ‘Reviewer Guidelines’. The policies should be clearly communicated to authors and reviewers through email and in the online submission system. They should also explain how parties can raise concerns about possible policy infringements, the consequences any party might face in case of infringement, and the means available to appeal journal decisions regarding these policies. Additionally, the policies should be supplemented with educational resources or links to information on the responsible use of AI (see, for example, the Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum). Editors should also consider announcing the release or update of AI policies through published editorials.
We acknowledge that there may be disciplinary or operational difficulties (e.g., submission system limitations) that could affect the development and implementation of AI policies. However, clear checks and declaration forms can be created to collect information on AI use and any potential conflicts of interest associated with their use (e.g., disclosing if the AI used was developed by the publisher).
We strongly recommend that all information regarding AI use for a particular manuscript be declared in the appropriate manuscript sections, in a separate AI declaration section (see an example here), or through the use of publication facts labels (see our detailed recommendations below). Finally, we strongly recommend monitoring adherence to the journal’s AI policy and providing regular reports on that adherence and any policy infringements.
Authorship/Contributorship: We strongly recommend that AIs should not be listed as co-authors on publications. Editors should consider the World Association of Medical Editors (WAME), the International Committee of Medical Journal Editors (ICMJE), or the STM association (STM) materials for guidance on this topic. For instance, the ICMJE states that AIs cannot be authors “because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship”.
Citations and literature review: We strongly recommend that AI outputs not be cited as primary sources to support specific claims. Research has shown that citations and information provided by AI tools can be inaccurate or fabricated. Authors and reviewers should be reminded to always read and verify the information they cite, as they are responsible for the information they present. Additionally, AI outputs may not be reproducible at a later time, so editors should consider whether authors need to capture and time-stamp any outputs they mention or cite (see an example of authors showcasing ChatGPT’s responses on a specific topic).
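One lightweight way to do that capture is sketched below. It saves an AI response alongside the prompt, the tool name, and a UTC timestamp; the field names and file name are hypothetical illustrations, not part of any journal’s required format.

```python
# Hypothetical sketch: capture and time-stamp an AI output so it can be
# referenced later, even if the model gives different answers over time.
import json
from datetime import datetime, timezone

record = {
    "tool": "ChatGPT (model and version as reported by the vendor)",
    "prompt": "Summarize the main limitations of study X.",
    "output": "...the AI response, pasted verbatim...",
    "retrieved_at": datetime.now(timezone.utc).isoformat(),
}

# Write the record to a file that can be archived as supplementary material.
with open("ai_output_record.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2, ensure_ascii=False)
```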
Data collection, cleaning, analysis, and interpretation: We strongly recommend that any use of AI for data collection, cleaning, analysis, or interpretation be disclosed in the Methods section of the manuscript or in an AI declaration section (see an example here). These statements should ideally be accompanied by appropriate robustness and reliability indicators, as well as steps to ensure their reproducibility.
Data or code generation: We strongly recommend that any use of AI for data or code generation be disclosed in the Methods section of the manuscript, as well as in the data and code declaration sections that some journals have, or in an AI declaration section (see an example here). Editors should be aware that generated data or code can be an excellent resource for educational purposes, but could also be misused to create fake data for hypothesis testing or other analyses. Furthermore, it might be difficult to distinguish between authors generating (part of) the code or data with an AI and then editing the output themselves, versus writing the code or collecting the data themselves and then using an AI for editing.
Visualisation – creation of tables, figures, images, videos, or other outputs: We strongly recommend that any use of AI for visualisation be disclosed in the Methods section of the manuscript and in the captions or legends of those outputs. AI-generated visualisations may require additional checks to ensure their validity, as well as steps to ensure their reproducibility. Editors should consider an example of a policy banning the use of AI for these purposes, and an example of a retraction of a paper due to a “nonsensical” image.
Writing, language, and style editing: We strongly recommend specifying whether and how authors should declare their use of AI for writing, language, or style editing. Editors can consider recommending that authors declare such use in the Acknowledgements section or in an AI declaration section (see an example here). Alternatively, editors could specify that such use, like the use of spell-checking software, does not need to be declared. Editors should be aware that it might be difficult to distinguish between an author generating (part of) a text with an AI and then editing it, versus writing the initial draft and then editing it with an AI. For a helpful overview of publishers’ policies on this issue, see this Scholarly Kitchen post (compiled in spring 2024).
Other research uses: We strongly recommend disclosing any AI use, including uses not covered in the sections above. Such use should be disclosed in an appropriate section (e.g., the Acknowledgements, the Methods section, or an AI declaration section). For example, authors might choose to employ self-check AI tools to verify compliance with research reporting guidelines or to check image, code, or data integrity.