Tag: security

  • We Bought Four Amazon Echo Dots!

    Well, it is Prime Day and, as usual, there are some deeply discounted items available on Amazon. My family usually looks but doesn’t buy on Prime Day, hoping to predict the discounts we may see on Cyber Monday or Black Friday in a few weeks. We especially avoid any smart speaker or digital assistant hardware since we have always had (well informed) privacy concerns. This year was different. We caved and bought Amazon Echo Dots for the whole family. Here’s why.

    They’ll Be Perfect for Our New Home

    Our forever family home is being built and we are planning a move-in just a few months from now. We are going to have more space for the six of us than we have ever had, especially in the kids’ rooms, the master suite, and the kitchen/dining great room. We’ll be a bit more spread out than we’ve ever been and the Echo has some great options for communicating throughout your home without having to scream up the stairs or down the hallway. The intercom feature was a deal sealer for both my wife and myself. The kids are pretty excited too.

    Digital Homeschool Help

    More of us are homeschooling than ever now and, with four kids all doing school work nearly every day, we need help sometimes. YouTube can be great for presenting complicated concepts in helpful ways (7th-grade math, anyone?), but my kids looking at screens and using a Google search for spelling or calculator answers isn’t the safest proposition. Alexa (the virtual assistant on the Amazon Echo) will answer your spelling, language arts, science, and math questions with no risky search results or screen use at all. It is more important for my kids to know how to get information than it is for them to know the info when they pass a grade. Alexa and other virtual assistants are the new wave of information access and they aren’t going away. They’re only getting smarter and faster.

    Less Screen Time

    My kids, like all kids, love to sit around and look at a phone or tablet. We are constantly having to get onto them about their obsessive behavior. We try to set better examples, and we don’t always succeed, but giving them alternatives is very helpful. The Echo Dot is a smart speaker without a screen. At night, when the kids want to listen to a podcast or music for bedtime, they can ask Alexa to play it for them instead of having their screens in their faces right up until they fall asleep. Studies have shown this isn’t good for their sleep and can actually be very detrimental to their development. With parental controls on the subscription services we use and on Alexa itself, we can ensure that our kids aren’t looking at their screens and are only listening to music and podcasts we’ve approved of.

    Safety and Security Upgrades

    All of this is great, but digital safety and data security are always an issue, especially with artificial intelligence that is designed to learn about you in order to be more useful to you. There is an obvious trade-off: you’re giving it information in exchange for convenience. I believe most of us consider that an acceptable exchange, considering Alexa and Google Home have been some of the fastest tech products ever to be integrated into people’s homes. The truth is that we have been making this exchange for a long time without really thinking about it. Every post on Facebook, Twitter, or Instagram, every search on Google, and every purchase or browsing session on Amazon has been used to build a database of advertising information about you. This can be scary to many, but in all honesty, that ship has sailed and you raised the sails for it to do so.

    When you use these sites, you allow them access to your information. Alexa is no different, and my family has considered the risks and decided it’s worth it. First of all, we already get targeted ads because we do so much of our shopping on Amazon and searching on Google. Secondly, the latest models of the Amazon Echo Dot have added features, like a hardware button to turn off the microphone, that make us feel like we can avoid being listened to when we don’t want to be.

    Risk/Reward

    When you narrow it down, it is a consideration of opportunity cost. You have an opportunity for convenience, but it will cost some of your info. At a $19.99 price point, the Echo Dot is a great deal right now on Prime Day, so we bought four of them. They’ll be here in a couple of days and I’ll set one up and let you know how it all goes. Stay tuned for my (late but in-depth) review of the Amazon Echo Dot as a tool for controlling kids’ screen time.

    If you shop Amazon Prime Day today, consider using http://smile.amazon.com and signing up to support our non-profit, Four Point Families. You’ll have to search for Four Point Families and select it as the organization you’d like to partner with. Then Amazon will send 0.5% of your purchase our way to help us continue to protect families. Thanks.

     

  • CES2020 | Protecting Your Family’s Privacy and Data

    Walking the show floor at CES can be a major assault on the senses. Every booth has music and lights and giant screens or projectors showcasing the latest and best of their technology offerings. One thing that is cutting through all of the noise, however, is the need for tech companies to earn the trust of their target consumers. Voice control, smart home technology, and data mining for convenience in retail are on the rise. The companies who use our information to make our lives easier have to convince us that they are going to stop there. The good news is that it seems they are understanding this truth.

    Protecting Your Family’s Privacy and Data

    Robin Raskin, the founder of Living in Digital Times, said that trust will be a major theme at CES this year. I spent several days on the CES show floors and I can tell you this is the truth. Car manufacturers are explaining how their tech is built to keep you safe. They are saying that the information gathered about you is meant to make the goal of safety more attainable. Convenience is being showcased at nearly every booth on the floor as well. Convenience requires data, so it is no surprise to see these exhibitors featuring their privacy policies upfront for all to see. Even toy makers are touting their focus on privacy. The connected toys your children will play with shouldn’t be tracking their every move.

    Trust and Responsibility

    Protecting your family’s privacy and data is a huge responsibility. The responsibility for data privacy, security, and trust is shared, though. We, as consumers, need to know the role we play in protecting our information. There are key factors we must keep in mind when thinking about security. Our passwords are very important: we must use different passwords across our online accounts, and we must be careful to avoid passwords that are too easy to guess. Finally, we have to remember to set the security settings on our new smart devices when we take them out of the box. That new thermostat or camera is connected to the internet, which makes it susceptible to hacking, so its security settings must be set. If your device offers two-factor authentication, set that up as well. It can seem inconvenient, but it will protect you from a lot of trouble in the future.

    Companies can only do so much to protect your data and security. They can give you tools to protect yourself but they can’t force you to use them. Check out my other articles on data privacy HERE.

    One last important tip is that you only buy smart home devices from well-known, trusted companies. Most of the time, these larger tech companies have had multiple levels of scrutiny concerning their privacy policies. Some smaller developers from other countries will have had less accountability for what they do with your information. Their products cost less and seem to work in the same way but you aren’t guaranteed the safety settings some of the larger companies will give you. All of these products are a privacy risk but you’re likely to have more transparency from a larger, more established company.

    A Caveat

    Some privacy and security startups are making big waves right now. They are smaller companies that keep security and privacy front of mind as they develop their technology. My advice is to ask questions and look for more info on their privacy policies. Some of this new security tech is very cool and will be very helpful. Others are taking advantage of the new focus on privacy to sell more stuff that doesn’t work. Be a wise consumer. That is the most critical step in protecting your privacy.

  • YouTube’s New Kids Content Policies Explained


    Starting today, all creators are required to mark their content as made for kids or not made for kids in YouTube Studio. – YouTube Creators Email

    YouTube will be limiting the data they collect from videos that target children. This is in an effort to comply with the FTC’s demand that they take responsibility for the information they gather on a site that counts children among its most frequent audiences. Wording in the email suggests that YouTube is “helping” creators comply with COPPA as well as meeting the demands the Federal Trade Commission placed on YouTube as a media company.

    YouTube will use an algorithm to monitor for child-centric content and flag it as such if it is not flagged by the creator of the video. The email reminds creators to be vigilant about properly tagging their videos if they are made for children, as failure to comply could put them in violation of the FTC’s demands.

    The FTC has outlined what constitutes children’s content and YouTube has that information available on their support page. YouTube’s announcement briefly defines children’s content as:

    • It is directed to children as the primary audience (e.g. videos for preschoolers).
    • It is directed to children as a secondary audience (e.g. a cartoon video that primarily targets teenagers but is also intended for younger kids).

    YouTube’s guidelines state that they may override content creators’ settings if their content seems to be geared toward kids but isn’t marked as such. This could result in content creators being demonetized or held accountable in some other way for not properly categorizing their content.

    What Parents Should Know

    The FTC fined YouTube for its failure to comply with COPPA and told the company it had to have a plan by next year to keep children’s data private on its site. Many thought YouTube Kids was the solution, but so few parents actually used the kids’ version of YouTube that children remain a major audience for YouTube’s main site and app. The information creators give YouTube about their videos and channels will help YouTube know which videos to collect data from for future advertising. Also, the advertising on videos marked as “for children” will be different, using the content of the video as an indicator of the audience rather than viewing data from the viewers themselves.

    These changes, in my opinion, are a step in the right direction for YouTube. Their collection of data from young audiences has been a point of contention for tech safety experts, security and privacy agencies, and family advocacy groups for several years now. The policies handed down by the FTC are in direct response to some of these experts and agencies asking for an investigation into YouTube for its lack of compliance with COPPA.

    As parents, we rarely think about our kids’ digital footprints being collected and used against them, but it is happening every time they log on to an app or game. It is important to remember that the trail they leave behind online will follow them for the rest of their lives. The things they buy, the sites they visit, the videos they watch, and the games they play are all being compiled to create a profile on them that will be used to market to them online for years to come. If parents remember that our children’s web traffic is being collected, we can take steps to protect them from excessive data collection. Encourage them to use messenger apps that are made just for kids. [Facebook Messenger Kids, not WhatsApp or FB Messenger.] Remind them that what they share online becomes public the moment they share it. Tell them they should only use video and game apps that are intended for children and made by major developers who are more likely to comply with COPPA. Parents are responsible for the safety of their children, as well as their privacy and security, so take the steps you can to keep their data private.

  • Is FaceApp Sending all of Your Private Data to Russia?


    Last week everyone was posting pictures of themselves looking older or younger. They were all using FaceApp, an Android and iPhone app that uses AI to change your face to make you look older or younger, change your gender, and all kinds of different things. Then, suddenly everyone who had been posting pictures of themselves began sharing articles about the privacy dangers of FaceApp. What is true? What does FaceApp do with your pictures? Should we use apps like this? Here are the answers I found.

    Your Pictures Aren’t in Russia

    One of the major concerns, given recent political news, is that all of these pictures are being stored by the Russians, since the company that makes FaceApp is based in Russia. The truth is that these pictures are stored on servers owned by Google and Amazon. Many of the photo apps you use, including some of the social media apps you frequent, use the same server companies to store your pictures and posts. There is no evidence to suggest that your images are being collected by the Russian government or even by companies in Russia.

    Your Photos are Deleted after 48 Hours

    FaceApp’s privacy policy states that photos uploaded to their servers are usually deleted after 48 hours. It does state that some photos may be kept for analytical purposes, but that those photos stay on those servers rather than being sent on to FaceApp’s own offices. These photos are used by the artificial intelligence to make it smarter and help it do a better job of editing photos for people.

    FaceApp Terms Mention Affiliate Companies and Governments

    FaceApp’s policies do allow them to give your photos to other companies “in their network.” Again, they say that this is for analysis purposes and not data tracking. They also say that they’ll give your photos to law enforcement if requested through legal means.

    You Can Use FaceApp Without Giving Personal Information

    The company that makes FaceApp says that 99% of their users don’t log in to the app. That means there is no way for them to have your personal or identifying information. The only thing they collect in those cases is your photos. If you have location settings turned off for your camera, then there isn’t much personal data that can be gained from the images. All they actually have is a picture of an unidentified person’s face. Also, FaceApp only uploads the photos you tell it to, not your whole camera roll.

    “…please note that we may transfer information, including personal information, to a country and jurisdiction that does not have the same data protection laws as your jurisdiction.” – FaceApp Privacy Terms

    FaceApp Doesn’t Handle Data Differently than any Other Social Media Service

    The only major difference between FaceApp’s privacy policy and those of Facebook and Instagram is how much terminology they use to describe them. Personal data and photos are basically handled the same way by all of these companies. You may consider it more of a fair trade-off for Facebook and Instagram to collect your data in exchange for the services they provide. You also may be less inclined to worry because Facebook and Instagram are based in the United States. Either way, your data is being used in the same way by all of these companies.


    Just Share Smart

    These instances of public outcry about the privacy policies of an app or a company are a great time to be reminded of the importance of thinking before you share. The truth is that everything, once shared on the internet, is effectively public. It belongs to every citizen of the web and not to you anymore. This should govern every choice you make on every site you visit and every app you use. If you wouldn’t want the whole world seeing that photo of you, your child, or your spouse, then you shouldn’t share it. If what you are about to post as a status would put your security in jeopardy, then you shouldn’t post it. If you aren’t sure about a company or an app that is asking for your personal information, then you shouldn’t give them your personal info. It is very simple. Just think before you fill out an online form. Think before you share a photo. Think before you post your thoughts about anything and everything.

    The issue isn’t where your information is stored. It is the fact that you share photos, phone numbers, credit card numbers, and even your social security number like it is no big deal. You don’t have to be an internet security expert, you just have to pause and think.

     

  • YouTube May Have to Stop Making Money Off Our Kids

    The US Federal Trade Commission is finishing an investigation into YouTube’s children’s data and ad policies, and at least one member of Congress is now asking YouTube to make some major changes. Massachusetts Senator Ed Markey has officially requested that the FTC enforce some major policy shifts on Google for how YouTube handles advertisements to children and the collection of kids’ data.

    The request states that:

    Personal information about a child can be leveraged to hook consumers for years to come, so it is incumbent upon the FTC to enforce federal law and act as a check for the ever increasing appetite for childrens’ data. – FTC YouTube COPPA 2019

    This three-page document outlines a plan for rules that the FTC should enforce upon YouTube in order to keep them compliant with COPPA and to better regulate their child advertising practices. The rules include requiring Google to stop collecting data from users under 13, requiring YouTube to develop a way to identify users under 13 and implement COPPA-compliant policies, disallowing influencers from marketing products geared toward children under 13, and forcing Google to create a fund for developing children’s content that is ad-free and COPPA compliant.

    COPPA imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.

    What Parents Should Know

    Parents have to be intentional about teaching their children about online privacy. Regulations from the FTC will likely be coming in the near future. Even if these changes aren’t as strict as the ones listed in the letter from Senator Markey, they will still cause major ripples in the YouTube creator and viewer community. The way YouTube seems to handle these kinds of problems is by “demonetizing” videos that contain the type of content they are taking heat about. The heat they are getting from the FTC right now, though, concerns some of the most profitable channels on any video-sharing platform ever.

    Advertising is the way these companies make their money, and collecting data is their sole model for targeting their advertising. If they aren’t allowed to target children anymore, then there won’t be much content on YouTube for children at all. Our approach has always been to only allow our kids to watch YouTube videos that we have selected, and they must watch them on the television in the living room. That protects them from any surprises, and we curate the types of videos they are allowed to watch. We also have YouTube Premium, which removes ads. This is helpful since the algorithm that selects which ads show up on which videos often doesn’t take the age of the target audience into account (e.g. an ad for the latest Child’s Play film on a video about kids making slime).

    As I always say, we should hold these companies accountable as much as possible, but it falls to parents to be the responsible ones when it comes to our children’s digital health and online safety. What is your approach to YouTube? Do your kids watch as much as they want? Do you limit their viewership on YouTube? Do you think this news will affect how much time you allow them to use the app? Let me know your thoughts in the comments below.

     

  • What Are Browser Cookies? How Do They Work?

    Facebook and Google have both held their major developer announcement events over the past couple of weeks. They have both focused heavily on privacy and what they’re going to do to protect users’ data. This comes as no surprise because many governments have called them to action in this department, saying that they have to protect their users’ data more securely. Since privacy is such a major topic at these events, the term “cookies” is being thrown around all over the place. You’ll see article after article talking about what Google is going to do with your cookies, what Facebook is doing with your cookies, and how advertising companies are tracking you using your cookies. You even get a little pop-up banner when you go to a new website that says, “Hey there, we use cookies.”

    What’s the big deal about cookies?

    There was a day when you would log onto a website and you were basically visiting it for the first time, every time. Cookies help make sure that when you go to a website, that website remembers you and may even remember what you did the last time you were there. Here is how that works: I open my browser and sign on to a site, as if to say, “Hi, my name is Michael and I’m going to www.teachmeaboutcookies.com.” That website hands me a “cookie” and says, “Keep this for when you return. We’ll look for this cookie, and when we see it we’ll remember you, so you won’t be starting from the very beginning when you log on to our website.” That is called a first-party cookie. First-party cookies are how websites remember that you logged in so you don’t have to log in every time you go there, or how Amazon remembers what’s in your shopping cart so you can go back to Amazon.com two or three times this week, add things to your cart, and order it all at once later without having to log in again every time. That’s how a first-party cookie works.
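    That first-party handshake can be sketched in a few lines of Python using the standard http.cookies module. The site name and visitor ID below are made up for illustration; a real site would generate a random ID and keep a record of it on its own servers.

    ```python
    from http.cookies import SimpleCookie

    def first_visit_response_header():
        """Server side: hand a brand-new visitor a cookie to keep."""
        cookie = SimpleCookie()
        cookie["visitor_id"] = "michael-12345"  # hypothetical ID for this browser
        cookie["visitor_id"]["domain"] = "teachmeaboutcookies.com"
        cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 30  # remember for 30 days
        return cookie.output()  # the Set-Cookie line sent with the response

    def recognize_returning_visitor(cookie_header):
        """Server side: read the Cookie header the browser replays on its next visit."""
        cookie = SimpleCookie()
        cookie.load(cookie_header)
        return cookie["visitor_id"].value if "visitor_id" in cookie else None

    print(first_visit_response_header())
    # On the next visit, the browser sends the cookie back and the site remembers us:
    print(recognize_returning_visitor("visitor_id=michael-12345"))
    ```

    Notice that it is the browser, not the website, that actually stores the cookie; the site only sees it again because the browser replays it with every request to that same domain.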

    Here’s how third-party cookies work. I go to teachmeaboutcookies.com and they give me that cookie I need so that I will be recognized when I return to the website. That cookie is stored in my browser. However, there are ads on this page. There’s an ad from YouTube telling me to go watch some videos, and there’s an ad from safe.becausefamily.org saying, “Hey, you should learn about tech safety since you’re interested in cookies.” These ads have little bits of code in the website you’re visiting, and they are now sending cookies to your browser and saving them there. Every time you go to any website with advertising, it adds more third-party cookies, which are all stored in your browser. All of that ad-tracking data is saved in your browser through their cookies, so when you go to other websites, they will know what ads you’ve seen and responded to and will put ads for similar things on the other websites you visit. That’s how third-party cookies work.
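    To see why those third-party cookies matter, here is a toy sketch (in Python, with made-up site names and cookie IDs) of how an ad network can stitch one browser’s visits to unrelated sites into a single profile, simply because the same tracker cookie comes along for the ride on every page:

    ```python
    # One browser carries one tracker cookie ID; every page that embeds the
    # tracker's ad reports that ID back, so the visits link up into a profile.
    # All names and IDs here are invented for illustration.

    ad_network_profiles = {}  # tracker cookie ID -> list of sites visited

    def record_ad_impression(tracker_cookie_id, site):
        """Called by the tracker's code each time its ad loads on some page."""
        ad_network_profiles.setdefault(tracker_cookie_id, []).append(site)

    # The same cookie ID shows up on three unrelated sites:
    record_ad_impression("abc123", "teachmeaboutcookies.com")
    record_ad_impression("abc123", "recipes.example")
    record_ad_impression("abc123", "running-shoes.example")

    # The network never needed a name; the cookie ID alone ties the visits together:
    print(ad_network_profiles["abc123"])
    ```

    This is also why blocking or clearing third-party cookies breaks the chain: a fresh cookie ID can no longer be connected to the old profile.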

    The reason “browser cookies” has been in the news these days is not the first-party cookie being placed in your browser to make a website easier and more convenient to use. We like not having to log in every time we visit a website; we are happy to go back to a shopping cart in which everything has been saved, or to revisit a form we started days ago and continue filling it out right where we left off. We can do this because of first-party cookies. The latest issues are coming from third-party cookies. The government and many privacy agencies and internet safety experts, including myself, would like companies to be held a little more accountable for what they do with those third-party cookies.

    What Parents Should Know

    Cookies and other web-traffic information are often collected and sold to help ad agencies you have never interacted with build profiles on you, so you can be advertised to more effectively and therefore buy more stuff. The issue gets even bigger because our kids are using these websites and apps, and this data is being collected on them too. Even they have profiles that track how they use the internet and their apps for advertising purposes. Companies are beginning to wake up to the fact that people don’t want their data sold and traded all over the place like it’s the stock market. They’re starting to respond: Google announced that Chrome will be stricter about how websites use the cookies they store in your browser, and Safari has done the same thing recently. There are other browsers, such as Brave or DuckDuckGo, that are very strict about how advertising code is used when you surf the internet. There are even laws requiring companies to be transparent about how they use cookies on their websites. That’s why you get those annoying little pop-ups that you just click OK on to get them out of the way. I recommend you click “more information” next time and see what they do with the data they collect as you browse their website. You might be surprised.

    Unfortunately, opting out of that cookie storage is not really that simple. However, you can clear your cookies in any browser you use. I advise you to ask yourself this question: do the websites I’m using need my web-browsing information in order to serve me properly? Some actually do. Amazon can’t really sell me stuff I’m interested in if it’s not allowed to collect the data it needs to know what I’m interested in. Facebook can’t let me log on and check my notifications real quick without entering a password unless first-party cookies allow it to. Ask yourself, “Does this website need my information to work?” If so, great, allow it. Otherwise, if you’re just browsing, just looking at something, or just reading somebody’s blog, then there is no reason for them to collect your information. You must protect your cookies.

    Listen to this article in the podcast below:

  • Facebook is Making a Dating Feature while Instagram Works to Curb Bullying

    Social Media News from Facebook’s F8 Conference

    The F8 Developers Conference is Facebook’s annual event to showcase what they are working on in their numerous social media and messaging platforms. Tuesday’s announcements featured Facebook’s new features to connect people for romance and new friendships. Instagram is looking to stop bullying before it happens.

    Facebook Dating

    The dating feature for Facebook has been tested in several countries, including Mexico, Thailand, and Canada. It will be rolled out to more countries soon and finally released in the United States “by the end of this year.” The latest update to Facebook Dating allows you to build a secret crush list. This list of eight people will be saved and compared to the lists of your friends who also use Facebook Dating. If any of your crushes add you to their crush list, you will both be notified so you can make a connection.

    I guess, if you’re going to try to make romantic connections on social media, it is better to start with people you’re already friends with. Facebook says it will help you with connections based on your groups, likes, and comments on their app. Their goal is to connect you to people with whom you share interests, thus increasing the chance of you having a match. They actually said they are not trying to make connections for a one-time hookup but to help you find someone you’d be interested in having a real relationship with.

    Facebook is also testing features that will recommend new friends based on your interests, location, work, and even what college you went to. Again, being tested in just a few countries, the Meet New Friends feature will allow users to opt-in and then customize their profile to tell the system what interests to prioritize while connecting them with new friends. You can even list what activities you’d like to do with new friends and then be prompted to send a private message to someone and make plans to do that activity.

    What Parents Should Know

    Fewer of our kids use Facebook now, but there are those that still spend time there. Dating and friend-finding features can be problematic for parents who are concerned about their kids making unwanted connections on social media. My advice is to not allow your child on social media until around the age of 16 (based on their maturity), and even then keep an open conversation with them about the kinds of people they make friends with online. My rule will be to only allow my kids to communicate online with people they already know really well in real life.

    Instagram Fights Bullying

    While Facebook is trying to connect you with more people, Instagram is working to protect you from the people you’re already connected to. Developers have announced a tool that will nudge users to think twice before posting a negative comment on an Instagram photo. They can choose to ignore the advice and post it anyway, but Instagram is hoping that causing them to give pause will curb some of the negativity that Instagram is becoming known for. There are also tools in development that will allow users to block comments from certain users without blocking their accounts altogether.

    Just in case blocking comments isn’t enough of a break from the negativity, another Instagram feature will let you take a break by going into “away mode.” This is a way to sign off of Instagram for a while, no longer get messages, comments, and notifications or be prompted to post, but still not have to delete your account. Also, in an attempt to make Instagram “less pressurized” they are testing the ability to hide like counts.

    What Parents Should Know

    We have all heard stories of young people deleting or archiving photos because they didn’t get enough likes. We’ve read the horrible news stories about kids who harmed themselves, or worse, as a result of being bullied on Instagram or Snapchat. These efforts by Instagram to curb some of the negativity are a great idea. In my opinion, however, there is no better line of defense than parents. Our job is to create that safe space for our kids to come to when they have a question or concern about social media. We should be the ones determining how old they should be before they sign up for that Instagram profile. We should be the ones they come to when some stranger reaches out to them on Snapchat. That can’t happen if we aren’t aware or if we are too timid about the time they spend on social media. If we take our role seriously, we can raise kids who are healthy and whole.

    Listen to this post as a podcast below:

  • I Can’t Help You Protect Your Kids on Apps Meant for Adults

    I Can’t Help You Protect Your Kids on Apps Meant for Adults

    I field messages and requests all week long from parents who want the latest tools for keeping their kids safe online. They ask about everything from YouTube to Instagram to Snapchat and want apps that will monitor social media use, block adult content, and limit screen time. While there are resources to help parents in each of those departments, some apps just aren’t intended for your younger child. Unfortunately, many parents have a real problem accepting that fact.

    Streaming Videos

    Let’s look at YouTube as our first example. The video app was created in 2005 as a place for anyone to upload short videos to share with their friends. Google purchased YouTube in 2006 and social media became popular soon after, rocketing YouTube to the successful streaming platform it has become. The site is loaded with videos from filmmakers, vloggers, video gamers, makeup artists, geeks, professionals, educators, ministers, animators, artists, basically any category you can think of. It has evolved into an unstoppable force on which roughly 300 hours of footage are uploaded every single minute. YouTube has come under fire for some of its content being too mature or sensitive, and so it has employed algorithms to keep tabs on inappropriate videos. It also released an app for children called YouTube Kids. This app has seen its share of controversy, too, after YouTube was unable to keep sensitive material from showing up in videos on the app.

    YouTube obviously wasn’t intended for young viewers. It is a site that is populated primarily by videos uploaded by its users. Some companies that make content for kids use YouTube, but this is a choice those companies make in response to the popularity of the platform. It’s an attitude that says: “Kids are there, so we should be there too.” The goal is to reach the audience already there, not necessarily to build an audience on YouTube. There are no real parental controls (safe search is mostly useless), and videos are labeled as kid-friendly without any human eyes ever seeing the entire video. The only time a content reviewer sees a video is when enough users of the site have flagged it as inappropriate. Allowing your kids to watch YouTube on their own is a risk that many parents don’t even realize they are taking.

    What about Social Media?

    Snapchat, Instagram, and Facebook are all the same. They, like YouTube, feature content created and posted by the users of the service. This “User Generated Content” can vary from political or religious views, to silly cat videos or memes, to random personal updates that mean nothing to anyone. People also post updates on their serious mental health issues, they share their plans to harm themselves or others, they post images of themselves in compromising situations, and that’s just what people post publicly. Private messaging contains content that people send when they think nobody except those they trust is watching. Private messaging is how predators groom their victims. It’s how the out-of-control teenage boy convinces the girl to send him inappropriate pictures of herself. Social media is intended to be a place to connect with people, some you may know, some you don’t. It is meant to be a public forum, and what is meant to be private is meant to stay completely private. This is where the problems come in when parents ask for ways to monitor their kids’ social media.

    Age Rating vs Terms and Agreements

    I see a lot of parents giving their kids access to social media and other online activities when they reach the age of 13. This is based on the fact that the terms and agreements these sites have you approve before making an account list 13 as the minimum age to use their service. A common mistake parents make is thinking that this age is meant to protect their kids from content on the site when, in fact, it’s intended to protect the company from holding data and information on kids under the age of 13. COPPA says that companies can’t collect and use the information of kids under 13 without parental consent. If a company says you can’t use the site when you’re under 13, then it can do whatever it wants with all of that data, and if your kid is underage, it isn’t the company’s fault. You ignored the Terms and Agreements when you allowed them to use the site.

    Age rating is the age recommendation you’ll see in the app store when you are downloading an app. This age restriction is based on the actual content in the app, not any legal requirements for the company. The usual standard is that apps populated by user generated content are rated 17+. This is because the company can’t guarantee that what is seen on its product won’t be considered adult content. When we allow our kids to use apps that contain user generated content, we are allowing them to be subject to the opinions, behavior, and whims of everyone else who uses that app.

    Parental Involvement Before Parental Control

    When I am asked to help parents protect their kids in apps that are obviously not made for children, I feel like I’m being asked to give parents a suit their kids can wear to protect them while they play in a burning building. I get it. It isn’t easy to tell your kids they can’t do something they want to do. “My friends are all on Snapchat,” or the one that irritates me to no end, “The teacher/coach says I have to use Facebook to get the homework/practice schedule.” Sometimes we just have to say no. It is difficult to set the boundaries and limits that keep our kids safe, but if we have the right attitude about what we’re protecting them from, it becomes easier. Social media, YouTube, video games that are rated M for Mature, none of these things are intended for people under the age of 17, and when we allow our kids to use these products, we open them up to a world that is meant for adults.

    It is difficult for algorithms to catch nudity or violence in uploaded videos. Social media sites and private messaging apps go to great lengths to keep prying eyes from seeing what is being sent. This makes parental monitoring software hard to develop. Unfortunately, some burning buildings are just too dangerous, and there isn’t much that can be done to protect you if you’re inside. If you aren’t OK with your child seeing content that is meant for grown-ups, then I recommend uninstalling that app instead of trying to find software that keeps it from doing what it was intended to do.

  • WhatsApp Update Brings Thumbprint and FaceID Lock To Private Messenger

    WhatsApp Update Brings Thumbprint and FaceID Lock To Private Messenger

    The private messenger, WhatsApp, has updated recently to allow users to lock the app from prying eyes by using their Touch or Face ID. Private messaging is becoming more important to users these days since the spotlight has been on Facebook and Google for their data mining and sales. WhatsApp has been a mainstay of private messaging for some time now and this new update takes privacy from an algorithmic/software level to a more obvious tangible place. You can now use your FaceID or TouchID, depending on the generation of your iPhone, to lock people out of the WhatsApp software entirely. This will keep people from opening the app and looking through your messages. Currently this feature is available for iOS only but it is rumored to roll out to Android soon.

    What Parents Should Know

    It’s important to know that there are options that allow you to keep an eye on your kids’ messaging without having to physically take their phone from them. However, if the physical approach is your style, then this update from WhatsApp could become a problem for you. Messages being locked in this way needn’t deter you from checking up on your child’s messaging activity, though. You can store your thumbprint on your child’s device so you can unlock it, or just have them unlock the app for you when it comes time to inspect their messages.

    I recommend allowing your children to have a feeling of privacy by using some sort of software to monitor their messaging apps instead of taking the device from them every now and then. Not only does that plan give them a feeling of privacy, it is also a far better monitor than your weekly check-up. If a message monitoring algorithm like Bark is active, it will look at every single message your child sends or receives in real time, notifying you if any of those messages cross the line into dangerous or inappropriate content. Taking the phone from them to monitor it yourself allows messages to be removed before you get around to looking at them.

    I never advise spying on your children without their knowledge. They should know that you are keeping an eye on their messages and how the software works. They should also know what the consequences are if they send messages they shouldn’t be sending. Finally, you should have an open conversation so they feel like they can come to you if they receive a message they are not comfortable with. No matter what you do to monitor your kids’ messaging, having a culture of transparency and openness in your home is critical.

  • How Can Artificial Intelligence Protect My Family?

    How Can Artificial Intelligence Protect My Family?

    How AI Works

    When you think of artificial intelligence, it’s natural to imagine Skynet or some similar software that runs everything for us. While that could be the long-term goal, right now AI is nowhere near that smart. Currently, artificial intelligence isn’t intelligent at all. While it does learn from the input that is fed to it, there is no way for AI to decide what it needs to learn on its own. There is a very large gap between software algorithms that can learn and intelligent software that makes its own decisions.

    At CES in 2018, I watched a robot named Aeolus glide across a room cleaning up. It took it a solid three minutes to move from one side of the makeshift living room, reach down and pick up a Wii remote, and roll to the table to set it down. It was nothing like we have been promised by television and movies, but I guess it was still cool. What parents should understand is that while the developers of an AI can make promises about their algorithms learning and behaving as if they have intelligence, that is not the same as being actually intelligent. Humans still have to do the thinking.

    While it isn’t foolproof and is definitely not sentient, artificial intelligence is a good tool. There are many ways AI is useful, and much of the latest hardware and software uses AI for some of its most minor functions. Here are some of the interesting ways AI can make your parental control and accountability tools even better.

    Filters

    There was a day when an internet filter depended solely on the web or IP address of the site you were visiting to tell whether there would be inappropriate content or not. There was a master list that had to be updated continually with new websites and keywords. AI is different: the filter is built on images and other content that the algorithm was “fed” over and over again, so it detects the actual images, text, and videos on a web page instead of just the address of the site you are visiting. This can be helpful if a website doesn’t typically contain adult content but a certain article or comment section features material that crosses the line. A traditional filter couldn’t catch that, but one that uses AI can.

    Circle (meetcircle.com) and NetNanny (netnanny.com) are examples of filters that use smart algorithms to block web content.
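    To make the difference concrete, here is a tiny sketch in Python contrasting the two approaches. The site names, word list, and function names are all made up for illustration; a real AI filter uses a trained model over images and text rather than a keyword set, but the key idea is the same: the old style judges only the address, while the new style looks at what is actually on the page.

    ```python
    # Old style: a master list of blocked addresses.
    BLOCKLIST = {"badsite.example"}

    # Stand-in for what a trained model learns to recognize (hypothetical).
    FLAGGED_WORDS = {"violence", "gambling"}

    def blocklist_filter(domain: str) -> bool:
        """Blocks a page only if the whole site is on the master list."""
        return domain in BLOCKLIST

    def content_filter(page_text: str) -> bool:
        """Scans the actual text of the page, so one bad article on an
        otherwise clean site still gets caught."""
        words = {w.strip(".,!?").lower() for w in page_text.split()}
        return bool(words & FLAGGED_WORDS)

    # A clean site hosting one inappropriate comment section:
    page = "This comment section is full of gambling links."
    print(blocklist_filter("cleansite.example"))  # False -- the address looks fine
    print(content_filter(page))                   # True  -- the content is flagged
    ```

    The address-based filter waves the page through because the site isn’t on its list; the content-based filter catches it anyway.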

    Accountability

    Accountability software works very similarly to filters, except that when it sees something inappropriate it doesn’t block it but instead alerts whoever is on the notification list. AI has revolutionized this sort of software because it allows parents to receive only lists of unwanted sites instead of having to sort through everything that has been viewed by the person they are keeping accountable. The software I recommend, Accountable2You (accountable2you.com, promo code BecauseFamily), is updated constantly to allow its algorithm to properly and effectively scan for adult content. It works very well. You may get occasional alerts for content that shouldn’t be considered adult, but it’s not too often, and it’s worth it for the peace of mind.

    Privacy and Security

    Finally, when we discuss AI and algorithms, we must talk about privacy and security. Algorithms may have been the beginning of many of our privacy problems, but they may also be providing some solutions. Tools like BitDefender can be used to protect your home network. The AI can tell the difference between forgotten passwords and malicious login attempts. Our home networks are becoming increasingly attractive targets for hackers, and AI-assisted tools that encrypt and monitor your web traffic can protect you from that kind of attack.

    I hear a few different reactions when I talk about artificial intelligence. Most people roll their eyes or glaze over because they aren’t even interested. It’s some tech term they don’t think they can fully understand, so they’d rather not talk about it. The other group is super interested, always wanting to learn more about it and understand it better. These are my nerd friends. I love them. Finally, there’s the group that just freaks out. They immediately think of the movies and TV shows and just want to move into the woods and unplug. Which person are you? Are you willing to let AI work to your benefit in your family? Is it all too much for you? Let me know in the comments below.