Monday, April 17, 2023

AI-generated summary of the 60 Minutes AI edition

In a 60 Minutes interview that aired on April 17, 2023, Google CEO Sundar Pichai warned that artificial intelligence (AI) is advancing rapidly and that society needs to be prepared for the impact it will have on our lives. Pichai said that AI has the potential to be more profound than the discovery of fire or electricity, but that it also has the potential to be harmful if it is not used responsibly.

Pichai discussed some of the ways that AI is already being used in our lives, such as in self-driving cars, medical diagnosis, and customer service. He also talked about some of the potential risks of AI, such as job displacement, bias, and privacy concerns.

Pichai said that Google is committed to using AI for good, and that the company is working on developing ethical guidelines for AI development. He also said that Google is working with governments around the world to develop regulations for AI.

Pichai's interview is a reminder that AI is a powerful technology that has the potential to change our world in profound ways. It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Here are some of the key takeaways from the interview:
  • AI is advancing rapidly and has the potential to be more profound than the discovery of fire or electricity.
  • AI has the potential to be used for good or for harm.
  • Google is committed to using AI for good and is working on developing ethical guidelines for AI development.
  • Google is working with governments around the world to develop regulations for AI.
  • It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Can bees be sentient?

According to a growing body of research, bees are sentient, self-aware and have emotional lives³. A study by Newcastle University researchers Melissa Bateson and Jeri Wright found that honeybee response "has more in common with that of vertebrates than previously thought" and "suggest that honeybees could be regarded as exhibiting emotions"¹. The study was published in Current Biology in June 2011¹.


An experiment where honey bees were shaken for 60 seconds to simulate a predatory attack found that the bees' equivalent of blood — called haemolymph — was lower in dopamine and serotonin after shaking, compared with bees that weren't shaken. Both dopamine and serotonin are involved in mood regulation in humans².


The research raises questions about how bees are used in the agricultural sector or in laboratory experiments, where they've traditionally been seen as little more than tools³.


Source: Conversation with Bing, 4/17/2023

(1) New research: bees are sentient, self aware, are emotional | RNZ. https://www.rnz.co.nz/national/programmes/first-up/audio/2018886340/new-research-bees-are-sentient-self-aware-are-emotional Accessed 4/17/2023.

(2) Honeybees Might Have Emotions | WIRED. https://www.wired.com/2011/06/honeybee-pessimism/ Accessed 4/17/2023.

(3) Do insects, octopus and other invertebrates feel emotions ... - ABC. https://www.abc.net.au/news/science/2022-04-02/invertebrates-octopus-bees-feelings-emotion-pain-joy-science/100947014 Accessed 4/17/2023.

(4) ‘Bees are sentient’: inside the stunning brains of nature’s hardest .... https://www.theguardian.com/environment/2023/apr/02/bees-intelligence-minds-pollination Accessed 4/17/2023.

(5) New research deepens mystery about evolution of bees' social behavior. https://www.sciencedaily.com/releases/2021/05/210526132133.htm Accessed 4/17/2023.


Sources:

Bateson, M., & Wright, G. A. (2011). Reversal learning and affective responses in honeybees (Apis mellifera). Current Biology, 21(6), 1-4.

Perry, C. J., Baciadonna, L., & Chittka, L. (2016). Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science, 353(6307), 1529-1531.

"New research: bees are sentient, self aware, are emotional". RNZ. Retrieved April 17, 2023.

Friday, April 07, 2023

AI may not exactly follow your instructions! But that is OK.

A large language model uses AI/machine learning to recognize the data presented to it and connect the dots, sometimes taking the initiative to add to your prompt.

Here is an example. 

I am sitting in the library and these are the actual pictures.


I use Bing Image Creator, powered by DALL-E, and give it this prompt:
I am sitting in a library and working on my laptop, The laptop is in front of me as I type. To my left is the laptop's power cord and below it is my phone that is charging. on my right is a mousepad with my mouse and in front of it is a water bottle that is made of steel and has a blue top. The table I am sitting on has 5 chairs all spread out across the table. On the chair next to me is my backpack. I am Indian and I am wearing a black jacket as it is a bit cold here.

Here are the images that I get back.





The additions in the image that I did not mention were notebooks, blue chairs in the library, and the position of the mouse. The laptop bag is not on the chair.

My Observations:

  • It got the place (a library) right: books, bookshelves, chairs. It has enough training data to recognize the visual components of a library.
  • Telling the AI that I am Indian meant I had to accept the model's consolidation of what it has learned from photos of people from India. I did not say whether I am male or female, but the model may or may not have information from my profile.
  • The water bottle is a no-brainer.
  • The model has not learned what "on the chair next to me is my laptop bag" means; there may not be many photos of this scene in its training data.
I think Microsoft, OpenAI, and Google are all throttling back some of these capabilities, as it may be easy to improve the results of my prompt by adding more training data. I think the concern is that if the model returns results too close to reality, there may be a lot of anxiety about what could happen next.
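For anyone who wants to run a similar experiment outside the Bing web interface, the underlying DALL-E model can also be called programmatically through OpenAI's API. Here is a minimal sketch using the pre-1.0 openai Python package that was current when this post was written; the API key and the shortened prompt are placeholders, not the exact prompt above.

```python
# Minimal sketch: generating images from a text prompt with the OpenAI API
# (pre-1.0 openai Python package). The key and prompt below are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

prompt = (
    "I am sitting in a library working on my laptop. To my left is the "
    "laptop's power cord and my charging phone; to my right a mousepad, "
    "a mouse, and a steel water bottle with a blue top."
)

response = openai.Image.create(prompt=prompt, n=2, size="1024x1024")
for item in response["data"]:
    print(item["url"])  # links to the generated images
```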

Bard's Response:

It sounds like you are working hard in the library. I hope you are finding it a quiet and productive place to work.

I can picture you sitting at a table with your laptop in front of you. To your left is the laptop's power cord, and below it is your phone that is charging. On your right is a mousepad with your mouse, and in front of it is a water bottle that is made of steel and has a blue top. The table you are sitting at has 5 chairs, all spread out across the table. On the chair next to you is your backpack. You are Indian, and you are wearing a black jacket as it is a bit cold here.

I hope you have a great day!


Tuesday, March 14, 2023

How to leverage Google’s AI Vision service for your computer vision needs

Google Cloud AI Vision is a new service that allows you to easily build and deploy computer vision applications on the cloud. With Google Cloud AI Vision, you can access pre-trained models for common tasks such as object detection, face detection, text extraction, and more. You can also customize and train your own models using Google's state-of-the-art AutoML technology. This post covers some of the benefits and use cases of Google Cloud AI Vision for different industries and scenarios.

Computer vision is one of the most influential and versatile applications of artificial intelligence (AI). It enables machines to understand and interact with the visual world, unlocking new possibilities for businesses across various domains.

However, building and deploying computer vision applications can be challenging and costly. It requires specialized skills, hardware, data, and tools. That's why Google has launched its new Vertex AI Vision service, which aims to simplify and democratize computer vision on the cloud.

What is Google Cloud AI Vision?

Google Cloud AI Vision is a computer-vision-as-a-service capability that lets you easily create and run computer vision applications on Google Cloud Platform (GCP). It offers two main features:

Pre-trained models: You can use ready-made models for common computer vision tasks such as object detection, face detection, text extraction, logo detection, landmark recognition, product search, etc. These models are powered by Google's cutting-edge research and technology, such as TensorFlow and BERT.
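As a rough illustration (not part of Google's announcement), a couple of those pre-trained features can be called from Python with the google-cloud-vision client library. The sketch below assumes the package is installed and GCP application-default credentials are configured; the image file name is a placeholder.

```python
# Minimal sketch: pre-trained Cloud Vision API from Python.
# Assumes `pip install google-cloud-vision` and configured GCP credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("store_shelf.jpg", "rb") as f:  # placeholder image file
    image = vision.Image(content=f.read())

# Label (object) detection
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Text extraction (OCR)
texts = client.text_detection(image=image).text_annotations
if texts:
    print(texts[0].description)  # the full detected text block
```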

Custom models: You can also build your own models using AutoML Vision, which automates the process of data labeling, model training, evaluation, and deployment. You can either use your own data or leverage Google's public datasets such as Open Images, YouTube-8M, etc.
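For the custom-model path, requesting a prediction from a trained AutoML Vision classification model looks roughly like the sketch below. The project and model IDs are placeholders, and the call uses the google-cloud-automl client; treat it as an assumption-laden example rather than a definitive recipe.

```python
# Hedged sketch: prediction from a custom AutoML Vision classification model.
from google.cloud import automl

project_id = "my-project"    # placeholder
model_id = "ICN1234567890"   # placeholder AutoML Vision model ID

prediction_client = automl.PredictionServiceClient()
model_name = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

with open("part_photo.jpg", "rb") as f:  # placeholder image
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

response = prediction_client.predict(request={"name": model_name, "payload": payload})
for result in response.payload:
    print(result.display_name, result.classification.score)
```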

With Google Cloud AI Vision, you don’t need to worry about infrastructure management or scalability issues. You only pay for what you use based on your computing and storage needs. You also get access to Google’s security features such as encryption at rest and in transit.

How can Google Cloud AI Vision benefit your business?

Google Cloud AI Vision can help you solve various business problems using computer vision. Here are some examples of how different industries and scenarios can benefit from this service:

Retail: You can use Google Cloud AI Vision to enhance your customer experience by providing product recommendations based on visual similarity, enabling visual search on your website or app, detecting counterfeit products, etc.

Healthcare: You can use Google Cloud AI Vision to improve patient care by diagnosing diseases based on medical images, detecting anomalies in X-rays or MRI scans, monitoring patient behavior using video analytics, etc.

Manufacturing: You can use Google Cloud AI Vision to optimize your production process by inspecting product quality using defect detection, measuring dimensions using object measurement, tracking inventory using barcode recognition, etc.

Education: You can use Google Cloud AI Vision to enhance your learning outcomes by grading student assignments based on handwriting recognition, creating interactive quizzes based on image classification, generating captions for educational videos using speech-to-text transcription, etc.
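For instance, the handwriting-grading scenario above maps to the Vision API's document text (handwriting) OCR. A minimal sketch, with a placeholder file name, might look like this:

```python
# Sketch of document/handwriting OCR with the pre-trained Vision API.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("student_answer.jpg", "rb") as f:  # placeholder scan of an assignment
    image = vision.Image(content=f.read())

response = client.document_text_detection(image=image)
print(response.full_text_annotation.text)   # transcribed text, including handwriting
```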

Sources:

https://venturebeat.com/ai/google-reveals-whats-next-for-cloud-ai/
https://www.tensorflow.org/
https://arxiv.org/abs/1810.04805
https://cloud.google.com/vision/automl/docs
https://storage.googleapis.com/openimages/web/index.html
https://research.google.com/youtube8m/

Blog research: Bing

Sunday, March 12, 2023

Meta Plans to Launch a New Social Media App to Rival Twitter

Meta Platforms Inc., the parent company of Facebook and Instagram, is exploring the possibility of launching a new standalone social media app that could rival Twitter and other microblogging competitors such as Mastodon. This move could mark a significant effort by Meta to challenge Twitter’s position as the world’s “digital town square”, according to a report by Barron’s.

Source: Barron’s @janetcho

Saturday, March 11, 2023

Wallops Island Launch Today

Rocket Lab is set to launch its second Electron rocket from NASA's Wallops Flight Facility at 6 o'clock on Saturday. This will be the second flight from American soil by the space launch company.

NASA said it will support its second Rocket Lab Electron rocket launch from its Wallops Flight Facility in Virginia on Saturday night.

This exciting event should be visible throughout our area and is a great opportunity for space enthusiasts to witness a rocket launch up close.

Credit: MSN News, WTKR News, Bing. Source: “Launch this evening from Wallops Island should be visible throughout our area”, “NASA announces electron rocket launch this weekend: Here’s how you can watch”.

The Future of AI: PaLM-E and Beyond

Artificial intelligence (AI) has come a long way in recent years. From simple chatbots to advanced machine learning algorithms, AI has revolutionized the way we interact with technology. One of the latest advancements in this field is PaLM-E, an embodied multimodal language model developed by Google.

PaLM-E integrates AI-powered vision and language to enable autonomous robotic control. The main architectural idea of PaLM-E is to inject continuous, embodied observations such as images, state estimates, or other sensor modalities into the language embedding space of a pre-trained language model.
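To make that idea concrete, here is a toy sketch (my own illustration, not Google's code or the actual PaLM-E architecture) of what "injecting continuous observations into the language embedding space" means: image features from a vision encoder are passed through a learned projection so they live in the same vector space as word-token embeddings, and the combined sequence is fed to a single transformer. All names and dimensions are made up for illustration.

```python
# Toy sketch of the PaLM-E idea: project continuous observations (image
# features) into the token embedding space and process them with the text.
import torch
import torch.nn as nn

class MultimodalPrefixLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, image_feat_dim=768):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)  # language embedding space
        self.image_proj = nn.Linear(image_feat_dim, d_model)  # maps vision features into that space
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, token_ids):
        # image_feats: (batch, num_patches, image_feat_dim), e.g. from a ViT
        # token_ids:   (batch, seq_len), a tokenized instruction like "pick up the green block"
        img_tokens = self.image_proj(image_feats)       # (batch, num_patches, d_model)
        txt_tokens = self.token_embed(token_ids)        # (batch, seq_len, d_model)
        x = torch.cat([img_tokens, txt_tokens], dim=1)  # observations first, then text
        return self.lm_head(self.backbone(x))           # next-token logits over the full sequence

# Example: one image's patch features plus a short command
model = MultimodalPrefixLM()
logits = model(torch.randn(1, 16, 768), torch.randint(0, 32000, (1, 8)))
print(logits.shape)  # torch.Size([1, 24, 32000])
```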

This allows for more natural interactions between humans and robots. Instead of having to program specific commands for each task, robots equipped with PaLM-E can understand and carry out complex tasks based on human voice commands.

But this is just the beginning. As AI continues to evolve and improve, we can expect even more advanced capabilities from models like PaLM-E. The future of AI is bright and full of possibilities.

This blog post was created with the help of the Bing search engine and its sources, including palm-e.github.io, siliconangle.com, ai.googleblog.com, and arstechnica.com.
