In continuing to explore AI, I’ve had a few conversations about cheating, plagiarism, the rights of content creators, and intellectual property rights.
The US Patent and Trademark Office does not recognize the patentability of inventions that are not made by people, and the US Copyright Office takes the same position on copyright. One company did try to patent a product (I believe it involved DNA sequencing) and was rejected by the patent office for that reason. This came up in a discussion of how the largest employers are likely looking for ways to reduce payroll by using AI for tasks previously performed only by people: screenwriting for TV, video, and games; copywriting in marketing, news media, and social media; music composition and performance; and commercial art production in news, entertainment, and social media. If companies cannot copyright or patent work produced by AI, they have no barriers to protect that work, which would give them some incentive to keep “creatives” employed.
Essentially, AI looks at all the information it can find (mostly on the Internet), correlates it, and then mashes together an answer to a prompt. That prompt might be: create an image of a field of flowers, write a sonnet about flowers, describe how plants produce flowers, or write a concerto evoking someone admiring a field of flowers. If, in doing so, it drew on creations by people, should they be credited? Should we have a way to check its sources?
I’ve asked a number of friends who also work in tech, and most of the answers I’ve received are that it would be next to impossible: anything produced by AI draws on hundreds, thousands, or hundreds of thousands of sources, all mashed together, and one could never trace back what came from where.
But I decided to ask the engines themselves: what would the ramifications be if AI had to document its sources? Here were the answers:
Bard:

If AI had to document the sources of its points, it would likely become more cautious in its statements and more careful in its research. AI would also need to develop better methods for tracking and storing information, so that it could easily provide references to its sources. This would likely lead to a more thoughtful and nuanced AI, one that is less likely to make sweeping generalizations or to repeat inaccurate information. It would also make AI more transparent, as users would be able to see where AI is getting its information from.

Of course, there are also some potential downsides to requiring AI to document its sources. For example, it could make AI more time-consuming and expensive to develop. Additionally, it could make AI less creative, as AI might be less likely to take risks or to come up with new ideas. Overall, whether or not requiring AI to document its sources would be a good thing is a matter of opinion. There are both potential benefits and drawbacks to consider.

ChatGPT 4:

If AI had to document its sources, several aspects of AI development, performance, and impact would change. Some of the notable changes include:

- Transparency and trust: Documenting sources would make AI systems more transparent and accountable, leading to increased trust among users. It would enable users to understand where the information is coming from, and assess its credibility.
- Verifiability: Users could verify the information provided by the AI, cross-checking it with the cited sources to ensure the AI's output is accurate and unbiased.
- Ethical considerations: Documenting sources would emphasize the importance of ethical AI development, as it would become necessary for AI to avoid using unverified or biased sources.
- Increased complexity: AI developers would need to design systems capable of tracking and citing sources, which could increase the complexity of the algorithms and data structures.
- Performance trade-offs: Documenting sources may lead to trade-offs in response time or computational resources, as AI systems would need to manage additional data and processing.
- Legal and copyright concerns: AI systems would need to navigate legal and copyright issues surrounding the use of cited sources, which might require developers to build more robust compliance mechanisms.
- Education and research: AI-generated content with documented sources could prove to be valuable for educational and research purposes, as it would provide users with a solid foundation for further exploration.
- Shift in focus: AI developers might need to prioritize the quality and reliability of the sources used in their systems, which could potentially impact the way AI is trained and evaluated.

In summary, requiring AI systems to document their sources could lead to increased transparency, trust, and ethical considerations. However, it would also introduce new complexities and potential performance trade-offs that developers would need to address.
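The “tracking and storing information” both engines mention is easiest to picture in a retrieval setting, where an answer is assembled from identifiable documents. Here is a minimal, hypothetical Python sketch of that idea; the corpus, the word-overlap scoring, and all names are invented for illustration. Real generative models blend their training data in ways far harder to untangle, which is exactly my friends’ point.

```python
# Toy illustration: an answer that carries the IDs of the sources it drew on.
# The corpus and scoring here are hypothetical stand-ins, not a real AI system.

corpus = {
    "botany-guide": "Plants produce flowers to attract pollinators.",
    "poetry-primer": "A sonnet is a fourteen line poem with a fixed rhyme scheme.",
    "field-notes": "A field of flowers blooms in spring and summer.",
}

def answer_with_sources(prompt: str) -> tuple[str, list[str]]:
    """Return matching snippets plus the IDs of the sources used."""
    prompt_words = set(prompt.lower().split())
    scored = []
    for source_id, text in corpus.items():
        # Score each document by crude word overlap with the prompt.
        overlap = len(prompt_words & set(text.lower().split()))
        if overlap > 0:
            scored.append((overlap, source_id, text))
    scored.sort(reverse=True)  # best-matching source first
    answer = " ".join(text for _, _, text in scored)
    cited = [source_id for _, source_id, _ in scored]
    return answer, cited

answer, cited = answer_with_sources("how do plants produce flowers")
print(cited)  # the documents this answer was assembled from
```

In a system like this, citation is nearly free, because provenance is preserved at every step. The difficulty the engines gesture at arises when the “corpus” is dissolved into model weights during training and no such record survives.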
What are your thoughts?