ChatGPT is a major technological breakthrough that is sending shockwaves through every field that works with text. The full consequences of its wide adoption are yet to be seen, but some threats are already visible, and we would like to openly discuss our approach to using generative AI at Animalia.
Many people have pointed out that the biggest risk of adopting ChatGPT-like tools is flooding the internet with a tsunami of unreliable AI-generated content that is often indistinguishable from human-written, fact-checked articles, turning the information environment into a self-reproducing accumulation of misinformation with no reliable source to trace it back to. I think every one of us has already encountered such content, like the image of a wisent I generated as an illustration: at first glance everything looks fine, but something is definitely off.
Animalia.bio is an online encyclopedia with roughly 30,000 articles about animals, plus supplementary information. To build this database, we naturally had to use automation tools to combine multiple reputable open sources into shorter, easy-to-digest, well-structured articles. Even so, a human has always stood at the final stage of information processing to make sure we publish quality, reliable content, and we intend not just to keep this approach but to extend human involvement in content moderation.
For any site like ours, the problem of errors is unavoidable: there is no way to completely prevent wrong information from slipping into an article here and there, whether through a mistake in a source or human error. But the inevitability of mistakes is not an excuse to stop trying to eliminate them, or to be any less rigorous about recording the sources of our information.
With all that said, we want to state that we are not going to change our approach to creating articles: we will not use generative AI tools to write articles published on the site, nor to create the images that illustrate them. We will continue to rely on reliable sources and reference them, and we will begin involving users more widely in the moderation process so that we can eliminate as many mistakes on the site as possible.
At the same time, this doesn't mean we should reject the new technology where it can deliver results that benefit society. GPT-based tools, for example, can be effective for translation or grammar checking.
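For readers curious what such a limited, behind-the-scenes use might look like, here is a minimal sketch of a grammar-check pass on an internal draft using the OpenAI Python client. The model name, prompt, and `grammar_check` helper are illustrative assumptions, not part of Animalia's actual tooling; the key point is that the output still goes to a human editor, never straight to the site.

```python
# Hypothetical helper for grammar-checking an internal draft before human review.
# Assumes the openai Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def grammar_check(draft: str) -> str:
    """Return the draft with grammar and spelling corrected, facts untouched."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Correct grammar and spelling only. Do not add, remove, or change facts."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# The corrected text is then handed to an editor for fact-checking and publication.
print(grammar_check("The wisent are the largest wild land mammal in Europe."))
```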
I believe we are not the only team with this attitude, and that in the foreseeable future such a code of conduct will become the de facto standard for reputable websites.