Social Media to Develop New Child Monitoring and Guidance Algorithm, but It's Not Cheap

Featured Photo: Misguided baby subject to harmful social media content

Social media founders are no strangers to the court of public opinion. In 2018, Mark Zuckerberg was pulled into a congressional inquiry after being accused of allowing a third-party app to harvest and sell the data of many millions of Facebook users. TikTok CEO Shou Zi Chew likewise faced a congressional inquiry into the intentions and anticipated conduct of the TikTok app, after investigators found algorithm patterns that facilitated a number of illegal activities on the platform and potentially targeted children or steered them into dangerous streams of information and misinformation.

Dangerous challenge trends, the sale of illegal drugs, teen suicide, body image issues, and peer pressure are all part of a long list of problems attributed to social media's impact on teens.

After identifying the root problem, these founders and programmers have developed a new approach to the household use of popular social media platforms.

“The problem is, parents are literally leaving their children to their own devices. What we heard at the congressional hearings is that parents are not speaking with their children consistently about the impact of what they watch on streaming services,” explains a source who wishes to remain anonymous. “Whether it’s because there is a language barrier, or the parent is too busy, ‘they don’t have the time,’ or the child has access to the app without the parent’s permission, this is all common among American households. There are too many heartbreaking stories of homes broken by suicide or drug abuse that, obviously, should have been social media’s responsibility to prevent. It’s our job, as we procure and deliver universal data, to know exactly what is going to impact each person individually. Although collecting that data, and the way it is used, can be a very delicate subject, especially for TikTok.”

The source goes on to say, “American parents are taking a more… hands-off approach to their children. This, of course, relieves the home of the responsibility for mental health check-ups and for looking into the child’s life and welfare. Those duties of the home are essential to knowing what is okay to place into a child’s feed and what is not. So, since we are not in these homes, we are building a service that analyzes all of a child’s data, from Google searches to text messages. We then flag the possible risks in the child’s activity, and we have created an AI babysitter to lecture the child for fifteen minutes via the platform on the repercussions of what they are doing or what they have searched. There is an added video surveillance program that will virtually watch the child and identify things like drug use, premature sexual activity, and anything else our program flags as potentially harmful.”

A surveillance program of this size may be deemed a violation of privacy, but the source tells us that all of the information remains solely the child’s intellectual property and will be shared only in particular instances.

“We will initially send a message to the parent any time the child has been flagged in a potentially dangerous situation, and, if it is severe, the authorities will be contacted. This is a win-win for everyone. Parents can rest easy knowing they don’t have to have these delicate conversations with their children, conversations that are vital to a child’s successful development.”
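Taken together, the source is describing a simple pipeline: ingest the child’s searches and messages, flag risky categories, queue the fifteen-minute AI-babysitter lecture, alert the parent, and escalate severe cases to the authorities. Purely as an illustration of that described flow, here is a minimal Python sketch; every name, keyword list, and severity rule below is hypothetical and is not drawn from any actual product.

```python
from dataclasses import dataclass, field

# Hypothetical risk categories and keywords, for illustration only.
RISK_KEYWORDS = {
    "drugs": {"buy pills", "fentanyl"},
    "self_harm": {"self harm", "suicide"},
    "dangerous_challenge": {"blackout challenge", "fire challenge"},
}

@dataclass
class ChildActivity:
    child_id: str
    searches: list[str] = field(default_factory=list)
    messages: list[str] = field(default_factory=list)

def flag_activity(activity: ChildActivity) -> list[str]:
    """Return the risk categories whose keywords appear in the child's data."""
    text = " ".join(activity.searches + activity.messages).lower()
    return [cat for cat, words in RISK_KEYWORDS.items()
            if any(w in text for w in words)]

def escalate(activity: ChildActivity, flags: list[str]) -> str:
    """Mirror the source's stated policy: lecture, notify the parent,
    and contact the authorities if the flag is severe (hypothetical rule)."""
    if not flags:
        return "no action"
    actions = [f"queue 15-minute AI babysitter lecture for {activity.child_id}",
               "send alert to parent"]
    if "self_harm" in flags or "drugs" in flags:
        actions.append("contact authorities")
    return "; ".join(actions)

if __name__ == "__main__":
    kid = ChildActivity("demo-child", searches=["blackout challenge how to"])
    print(escalate(kid, flag_activity(kid)))
```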

Monthly subscriptions for the package are expected to be priced in the mid-hundreds of dollars, much less than the average bodyguard or babysitter.

Not an actual News Media post/For entertainment purposes only