OpenAI expresses concerns over the potential remarks made by its chatbot regarding individuals’ facial features

OpenAI has developed an advanced version of ChatGPT that can analyze images, and it is already helping blind users access visual information. However, because of concerns about its facial recognition capabilities, the feature has not yet been released to the public.

ChatGPT, known for helping with tasks like writing papers and code, can now analyze images, describe their contents, and even recognize specific individuals. The goal is to eventually let users upload images for assistance, such as diagnosing a car engine problem or troubleshooting other everyday issues.

One of the few people to test the advanced version is Jonathan Mosen, a blind employment agency CEO. He found the visual analysis immensely helpful during his travels; for example, it let him identify the items in a hotel bathroom, describing each one in enough detail that he could tell them apart.

Visual analysis is part of the GPT-4 model that OpenAI announced in March, marking its "multimodal" capability: it can respond to both text and image prompts. While most users can still only interact with the bot through text, Mosen was granted early access to the feature through Be My Eyes, a start-up that connects blind users with sighted volunteers. The collaboration was intended to test the "sight" feature before a wider release.

Recently, OpenAI updated the app to obscure people's faces. The change reflects OpenAI's reluctance to make facial recognition publicly available, as the capability raises ethical and legal concerns, particularly around biometric data and privacy laws.

OpenAI is also concerned about potential misjudgments and biases when it comes to analyzing people’s faces. They are actively working on addressing safety concerns and seeking democratic input to establish rules for their AI systems.

The visual analysis capability was not entirely unforeseen, since the model was trained on a mix of images and text from the internet. OpenAI acknowledges that celebrity facial recognition tools already exist, and it is exploring approaches such as Google's opt-out option for well-known figures who prefer not to be identified.

The visual analysis tool is not without flaws, however. Users have encountered cases where it misidentified images or provided incorrect information about them, much like the well-known hallucinations seen with text prompts.

Both OpenAI and Microsoft, which has invested in OpenAI, are mindful of the technology's power and its privacy implications. Both say they are committed to deploying AI responsibly and safely and are actively working to address these concerns.