Uzbek, Other Video Game Bans Straddle Cultural Divides

Posted June 16th, 2017 at 11:30 am (UTC-5)

FILE – A gamer plays ‘Call of Duty: Black Ops’ in Los Angeles, California. (Reuters)

Uzbekistan recently joined a long list of countries that ban video games for one reason or another. And while some games will get banned no matter the argument, an industry expert says a delicate balance is needed to satisfy local market expectations without sacrificing artistic expression.

Video games may be a product of Western society, but they have become part of popular culture, not just in the West. And as cultural mosaics, they often get banned – even in Germany, Ireland, and Australia, not to mention Uzbekistan, Iran, China, and North Korea – countries that run the gamut from the conservative to the totalitarian.

“All creative media, including games, are carriers of culture regardless of how intentional that might be,” said Kate Edwards, Executive Director of the International Game Developers Association (IGDA), in an email interview with Techtonics. IGDA is a nonprofit professional association representing thousands of video game developers worldwide.

“They carry certain perspectives, assumptions and biases about history, politics, cultures, faiths, and so forth,” she added. “And as such, they can become targets of backlash when these sensitive topics aren’t handled appropriately.”

Not a year goes by without “at least one incident of game content being flagged or banned because it contains something that conflicts with local cultural values and/or expectations,” she said. Typically, the main objections are sex, violence, and profanity.

FILE – People stand near ‘Sims 4’ game characters on a wall during E3, the Electronic Entertainment Expo, in Los Angeles, California, June 11, 2014. (Reuters)

Uzbekistan’s recent ban of more than 30 games, including The Sims, an innocuous life simulation game, and the more infamous Grand Theft Auto, cites violence, pornography, the distortion of values, and social and political destabilization. The list of banned games is long and includes popular titles like Mass Effect, Call of Duty: Black Ops, and Assassin’s Creed: Brotherhood, to name a few.

Some of these titles are combat games. Others are decidedly violent, albeit to varying degrees. But Mass Effect, for example, was banned in Singapore over same-sex scenarios. And the same is probably true of Uzbekistan’s ban on The Sims 4.

“[As] Uzbekistan is a Muslim country, such an allowance would be frowned upon,” said Edwards.

Egypt and Saudi Arabia banned Nintendo’s augmented reality game Pokemon Go for being un-Islamic, encouraging gambling and polytheism, and for perceived political symbolism. But the game was banned in other countries for safety reasons as well: players focused on catching virtual Pokemon characters on their mobile phones were trespassing or, in Bosnia, risking their lives by stumbling into minefields.

Games with extreme violence and drug use have been banned in European countries and Australia, for example. Germany took issue with Dead Rising 3 for depicting humans as enemies and understandably frowns on games with Nazi references.

But there are other reasons as well. Pakistan banned Call of Duty: Black Ops 2 and Medal of Honor: Warfighter for their poor representation of the country, and Iran banned Battlefield 3 because of a scene depicting a siege of Tehran.

FILE – Visitors play Battlefield at the Paris Games Week, a trade fair for video games in Paris, France, Oct. 29, 2016.

While Beijing has a refined set of criteria by which to judge a video game, the destruction of China was a key feature that landed Command & Conquer: Generals on its list of banned titles.

“From the local perspective, they’re trying to allow content that more or less meets the local expectations of their society, based on their values, mores, etc.,” said Edwards. “So when a ban is enacted, it often comes from a position of cultural protectionism or in a more extreme case, from a position of trying to control the local public mindshare around specific issues [e.g., mindshare protectionism is the bedrock rationale for the Great Firewall of China].”

As part of her “culturization” work, Edwards frequently communicates directly with governments to discuss potential offenses and appropriate solutions. “In some cases,” she noted, “there is no recourse and they want to ban the game outright.”

But in other situations, they are more open to dialogue about the issues at hand. “And we can negotiate a fix that still can serve the creative vision of the game while also making the content more compatible with local expectations,” she said. “In my years, such negotiations and discussions have included China, India, South Korea, Morocco, Turkey, Saudi Arabia, UAE, Greece, Singapore, and many others.”

FILE – Nintendo’s augmented reality mobile game ‘Pokemon Go,’ banned in many countries, is shown on a smartphone screen in this photo illustration taken in Palm Springs, California, July 11, 2016. (Reuters)

One could argue banning video games is a form of censorship, but Edwards cautioned that that definition depends on what the “censors” intend. “Are they trying to sway perception in a specific direction? Are they offended by how their culture, history, and/or faith were depicted in the game?”

“Some bans may not be as grievous if we better understand the motivation behind them,” she said. “Ideally, every country would be open-minded to all content and let the consumer decide what they consume. But we know this isn’t the case – not even in the U.S.”

That said, Edwards stressed that while game developers need to feel free to create whatever they want to create, they also have to “be mindful of the local expectations of various markets” if they want their games to be enjoyed worldwide.

“If they’re eager to share their game with more challenging content markets like China, the Middle East, and so forth,” she cautioned, “they have to be prepared to make changes to their content to make it more compatible with local expectations. Most typically, such changes are very small or surgical in nature and rarely disrupt the overall vision of the game.”

But she also implored governments to “strive to better understand the games medium and treat it with the same artistic fairness that is often more afforded to film and literature.”

“Games are a powerful medium that represent[s] a major cultural artifact of our time,” she said. “And it behooves countries to better understand what games represent – not just as a product for import, but as a legitimate form of artistic expression.”

Aida Akl
Aida Akl is a journalist working on VOA's English Webdesk. She has written on a wide range of topics, although her more recent contributions have focused on technology. She has covered both domestic and international events since the mid-1980s as a VOA reporter and international broadcaster.

Mobile Phishing’s New Tricks; Older Samsung Devices Left Vulnerable

Posted June 15th, 2017 at 12:32 pm (UTC-5)

Today’s Tech Sightings:

FILE – A worker is silhouetted against a computer display showing a live visualization of online phishing and fraudulent phone calls across China during the 4th China Internet Security Conference (ISC) in Beijing, Aug. 16, 2016.

Login-stealing Phishing Sites Hide Their Evil With Lots of Hyphens in URL

Malicious websites are using mobile-focused phishing attacks that are adding enough hyphens to their original domain names so that they are too long to be visible in the address bar. Researchers with security firm PhishLabs say the emerging trend is specifically geared for mobile devices and is part of a wider campaign to steal credentials on sites that use email and password authentication.
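The trick PhishLabs describes relies on a hostname so long that only its trusted-looking prefix fits in a mobile address bar. A rough heuristic for spotting such padded domains might look like this (the thresholds are illustrative guesses, not PhishLabs’ actual detection criteria):

```python
# Rough heuristic for the URL-padding trick described above: a long,
# hyphen-heavy hostname that hides its true domain off-screen on mobile.
# Thresholds are illustrative, not taken from PhishLabs.
from urllib.parse import urlparse

def looks_padded(url, max_len=30, max_hyphens=4):
    host = urlparse(url).hostname or ""
    return len(host) > max_len and host.count("-") > max_hyphens
```

The point is to check the full hostname, not just the visible portion: a padded domain passes a casual glance on a phone but fails a simple length-and-hyphen test.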

Researchers: Samsung Left Millions Vulnerable to Hackers

Anubis Labs security researchers say the world’s most popular smartphone maker left millions of customers using older phones vulnerable to hackers. The researchers say Samsung neglected to renew the domain that controls a stock app installed on older phones, thereby allowing anyone who claimed the domain to gain a foothold and push malicious apps to millions of smartphones.

Ransomware: The Most Important Thing You Can Do Not to Be a Victim

First of all, whatever you have on your computer should be backed up – somewhere else. That way, when a message pops up on your screen to let you know your files have all been locked down until you pay a ransom to release them, you have a backup ready to restore your system without paying a dime. According to writer Jack Wallen, backing up your data daily is the single most important thing to do, next to making sure all critical operating system updates are installed.
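Wallen’s core advice, a daily backup kept somewhere other than the machine itself, can be as simple as a scheduled copy job. A minimal sketch (the paths and once-a-day policy here are placeholders, not Wallen’s prescription):

```python
# Minimal daily-backup sketch: copy a source folder into a dated
# snapshot directory. In practice the destination should live on a
# separate drive or remote location, out of ransomware's reach.
import shutil
from datetime import date
from pathlib import Path

def snapshot(src, dest_root):
    """Copy src into a dated folder under dest_root; one snapshot per day."""
    dest = Path(dest_root) / f"backup-{date.today():%Y-%m-%d}"
    if not dest.exists():  # skip if today's snapshot already exists
        shutil.copytree(src, dest)
    return dest
```

Run from a daily scheduler (cron, Task Scheduler), this keeps a restorable copy even if every file on the main drive is encrypted.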


Google Store Flooded With Fake Apps; What Fake News Services Cost

Posted June 14th, 2017 at 12:59 pm (UTC-5)

Today’s Tech Sightings:

FILE – An illustration shows a 3-D-printed Android logo. (Reuters)

Malicious Fake Protection Apps Flood Google Play Store

In the aftermath of the infamous WannaCry ransomware, criminals are now flooding the Google Play Store with fake ransomware protection tools. Researchers with security firm RiskIQ found hundreds of cases where apps available for download claim to protect mobile phones but in fact expose them to new threats. While Google has a mechanism in place to protect Android users from malicious apps, some still slip through the cracks. And Apple devices aren’t immune either: researchers at Fortinet found two types of potentially destructive Mac ransomware advertised on the dark web, the unindexed part of the internet frequented by criminals.

Report: 12-Month Fake News Campaign to Influence Elections Costs $400,000

A new report from cybersecurity company Trend Micro sheds light on the dark marketplaces where those looking to create artificial narratives and trends or buy influence can pick and choose the fake news format that works for them. The company examined fake news underground shopping sites from around the world and found services for “discrediting journalists,” “promoting street protests,” “stuffing online polls,” and “manipulating a decisive course of action,” such as an election, selling for anywhere between $2,000 and $10,000.

London Blaze Shows Facebook’s Safety Check Is Deeply Flawed – Again

As a huge fire that claimed the lives of at least 12 people engulfed a Central London high-rise, Facebook activated its Safety Check to help people make sure their loved ones were safe. But writer Mathew Hughes complains that safety prompts are badly targeted, nudging people far away from the scene to check in. And the list of people Facebook asks users to check on often includes those who don’t even live in the same area or city as a given disaster.


Two-thirds of World Now on Mobile; US Election Hack Hit 39 States

Posted June 13th, 2017 at 12:46 pm (UTC-5)

Today’s Tech Sightings:

Passengers use their mobile phones as they ride a train in Bangkok, Thailand, June 12, 2017. (Reuters)

5 Billion People Now Have a Mobile Phone Connection

Two-thirds of the world’s population is now connected on mobile phones, according to GSMA Intelligence, a research group operated by GSMA, which represents the interests of mobile operators worldwide. GSMA Intelligence’s real-time tracker shows just a bit over 5 billion mobile users globally – a milestone by any measure.

Russia Hacked US Electoral System in 39 States

Investigators in Illinois say Russian hackers reached far deeper into the U.S. electoral system during the 2016 presidential election than previously thought. Evidence was found that shows the hackers penetrated voter databases and software systems in at least 39 states and tried to delete or alter voter data. Writers Michael Riley and Jordan Robertson say the hackers accessed software designed for poll workers on Election Day. Their intrusions were of such scope and sophistication that the Obama administration complained directly to Moscow on back channels.

We’re All Going to Be Crazy Data Hogs by 2022

A new report from Ericsson projects that 15 percent of the global population will have 5G connectivity, the next generation of wireless technology, by 2022. North America will lay claim to about 25 percent of 5G subscriptions. But while those projections are up about 10 percent from previous predictions, don’t expect 5G technology to be widely available any time soon.


AI Ethics Missing as Machine Learning Advances

Posted June 9th, 2017 at 11:53 am (UTC-5)

(T. Benson for VOA/Techtonics)

Teaching a computer to “think” the way the human brain does means feeding it huge amounts of real-world data so that it can learn, analyze, predict, and solve problems. But in this brave new world of artificial intelligence (AI) and machine learning, there are no ethical guidelines, no regulations, and no parameters to govern how this data is collected and used.

Artificial intelligence is a computer science branch that aims to develop computers that can learn and solve problems, much as a human brain does.

When IBM’s AI supercomputer Watson is enlisted to help doctors tailor therapy to breast cancer patients, it needs to consume high volumes of medical data before it can provide better insights into personalized treatment options and their outcomes.

Every time you pick a Netflix movie or ask your digital voice assistant to call a friend, you are dealing with AI. Your personal habits, behavior, and information are tracked and noted so that the AI system has a pattern it can use to determine the products and services that best suit your needs – or come close.

“So on the one hand, the companies are saying ‘if you give us this data, we’ll give you better services, more personalized services,’” said Ben Lorica, Chief Data Scientist for O’Reilly Media, which provides technology and business training.

Machine Learning is an aspect of artificial intelligence. Its goal is to build machines that can interpret data so that they can learn to improve their performance, provide new insights, and solve problems.

“On the other hand,” he added, “… it’s unclear how long they retain the data, who they share the data with and what kinds of privacy protection is placed around the data.”

People who are now using machine learning to analyze consumer and user behavior already have the data available to them, Lorica said. “What AI is opening up to them is a different set of techniques to analyze roughly the same data.”

While AI and machine learning tools don’t make it easier to collect data, they “dramatically change how the collected data is ‘used,’” said Cornell Tech computer science professor Vitaly Shmatikov in an email.

But in the absence of any regulations or ethical guidelines, he believes it is important to minimize this type of data collection, at least until there is better understanding of how these tools are being used, how they process data, and what they are extrapolating from it.

“We don’t yet fully understand what can be learned from the data,” he added. “Sometimes the data collected for one purpose [e.g., location data] reveals a lot of sensitive information about individuals, intentionally or unintentionally.”

FILE – Attendees interact with Commerce Bot, a robot that provides customer service with artificial intelligence technology and voice recognition, at SK telecom’s stand at the Mobile World Congress in Barcelona, Spain, Feb. 28, 2017. (Reuters)

Powerful data analysis technologies based on machine learning tend to reveal information about people “that may not be explicit in the data but can be inferred from it – social relationships, political and religious affiliation, sexual orientation, etc.,” he said. And they “may reveal a lot of information beyond the purpose for which it was collected.”

AI algorithms are predefined instructions that set a process in motion. An algorithm can help weed out fake stories or undesirable job applicants, for example, but depending on the quality of the instructions, the process could go wildly wrong.

AI algorithms similar to those used to track fake news, for example, are a particular concern because they may “make decisions and recommendations that do not meet our priorities and values as a society,” such as who gets hired or fired, who gets credit, or who gets a prison sentence.

“In these situations,” he added, “it is important to ensure that AI algorithms are fair, unbiased, accountable, and transparent.”

For now, technology companies are the ones leading the charge toward implementing machine learning, sometimes in partnership with other interested parties. But Lorica expects the field to open up significantly.

“You [will] see a wave of companies developing tools specifically for advertisers or e-commerce sites,” he said. “You just license the technology from these other companies. We’re also seeing … software-as-a-service, which makes it even easier for companies to use these tools. … We will see maybe the rise of a new set of companies who are selling tools that take advantage of these new techniques.”

Sophia, a robot integrating the latest technologies and artificial intelligence, developed by Hanson Robotics, is pictured during a presentation at the “AI for Good” Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland, June 7, 2017. (Reuters)

Fortunately, there are now groups and data scientists that are aware of ethical concerns surrounding data use in machine learning. And a lot of companies are already discussing transparency and fairness, and ethics training for data processing and machine learning algorithms.

But Lorica believes more can be done. He suggests adding AI ethics to the data science curriculum in companies as they onboard a new set of data scientists. “Or in the case of universities, then maybe fairness and transparency in [machine learning] is an important part of the training.”


New Robots Debut in Shanghai; Russian Malware Found on Instagram

Posted June 8th, 2017 at 12:51 pm (UTC-5)

Today’s Tech Sightings:

In this image taken from video, visitors look at iPal robots during the Shanghai CES electronic show in Shanghai, China, June 8, 2017. (AP)

Companion Robots Featured at Shanghai Electronics Show

More than 50 Chinese companies are showcasing a new generation of robots at Shanghai’s CES electronics show. The robots have heightened dexterity and skills and are being touted as potential home companions, shopping attendants or entertainers.

Russian Malware Controls Hiding on Britney Spears’ Instagram Page

Researchers at security firm Eset have discovered specially crafted comments on pop singer Britney Spears’ Instagram account that let malware surreptitiously learn the location of its command-and-control servers. The seemingly benign comments are thought to be the work of the notorious Turla group, which is associated with Russian hackers.
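The general technique, hiding a command-server address inside an innocuous-looking comment, can be illustrated with a toy encoding (this marker scheme is purely hypothetical and is not Turla’s actual method):

```python
# Toy illustration of hiding a command-server address in an
# innocuous-looking comment: here, the hidden string is spelled out by
# the single character following each '#'. NOT Turla's real encoding.
import re

def extract_hidden(comment):
    # Collect the character after every '#' marker, in order.
    return "".join(re.findall(r"#(.)", comment))
```

A comment that looks like hashtag noise to a human reader can thus carry a server address that only the malware knows how to reassemble.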

Chinese Exam Authorities Use Facial Recognition, Drones to Catch Cheats

Education authorities in China are going high-tech to catch cheaters during high-school exams. Using facial and fingerprint recognition, metal detectors, drones, and cellphone blockers, education officials are keeping a close eye on millions of students taking the annual university entrance exam. In past years, students used wireless devices disguised as belts or watches to communicate with accomplices outside the exam room.


China Loves Social Media; Wikimedia’s Use of Donor Funds Questioned

Posted June 7th, 2017 at 1:30 pm (UTC-5)

Today’s Tech Sightings:

FILE – A picture illustration shows icons of the WeChat and Weibo apps in Beijing. (Reuters)

Study Shows Chinese Okay With Social Media Being Bad for Health

An overwhelming majority of Chinese social media users – nearly 90 percent – are unperturbed by the negative effects of excessive online use, according to a national study from research group Kantar. Despite loss of sleep, loss of privacy and other drawbacks, young Chinese in their 20s are likely to continue using social media, albeit with some modifications to their bad habits.

Singapore Parent Takes School to Court to Get Kid’s iPhone Back

A Singapore man sued his son’s school after it confiscated the iPhone the boy was using during school hours, in breach of regulations. The school wanted the iPhone confiscated for three months, but the child’s father demanded its immediate return. The court threw out the complaint, saying returning the iPhone would send the wrong signal to students “that they can use their mobile phones during school hours with impunity.”

Wikimedia Foundation’s Donor Money Funds Outgoing Managers’ Nest Eggs

The nonprofit Wikimedia Foundation, which hosts the free online encyclopedia Wikipedia and its global community of volunteer contributors, paid outgoing managers up to half a million dollars in severance pay in 2015. The organization’s much-delayed 2015-2016 tax disclosures reveal that Erik Moller, who came up with the failed Wikinews service and the more successful Wikimedia Commons, received $208,306 in severance pay and two years of leave. Writer Andreas Kolbe argues the foundation can afford this luxury, but wonders if this is what donors have in mind when they respond to the appeal to “keep Wikipedia online and growing.”


Google Teaches Kids About Online Safety; Email Scams on the Rise

Posted June 6th, 2017 at 12:35 pm (UTC-5)

Today’s Tech Sightings:

A Google icon is shown on a mobile phone in Philadelphia, April 26, 2017. (AP)

Google Is Using Games to Teach Kids About Online Safety

Learning how to watch your step online to avoid malware and other pitfalls is an acquired skill. And Google has just launched a new program called “Be Internet Awesome” to teach young people how to make smart decisions online. The program includes a game, along with a curriculum for schools and a video series for parents to watch with their children.

Amazon, Kickstarter, Reddit, Mozilla Are Staging Net Neutrality Online Protest

Several big internet names have declared July 12 a “day of action” to protest the rollback of net neutrality rules by the U.S. Federal Communications Commission (FCC). The FCC’s decision cancels rules introduced by the Obama administration to regulate internet providers and the way they use consumer data. Amazon, Kickstarter and Mozilla are some of the participants who will change their websites on July 12 to raise awareness about the FCC’s decision.

Email Impersonation Attacks Rise 400 Percent

A new report from cloud email management firm Mimecast found a 400 percent increase in email impersonation attacks during the last quarter. In these scams, hackers or criminals impersonate business employees, executives or partners to trick a victim into sending wire transfers or data that can be sold or monetized. The company said billions of dollars have been lost to these scams in recent years.


‘CrowdGuard’ Fights India’s Sexual Violence, Bystander Apathy

Posted June 2nd, 2017 at 11:30 am (UTC-5)

A screenshot from a recent CrowdGuard presentation shows the app’s user interface. In cases of danger, the user gets an alert that displays information about another user in need of help. (CrowdGuard)

India has witnessed some harrowing acts of sexual and gender-based violence in recent years, some occurring in front of witnesses who watched, but did nothing. To combat this apathy, a social enterprise has come up with a crowdsourced solution that harnesses a trusted community to help people under attack.

Philip Sunil Urech has seen it all before, especially in India. “I observed a lot of situations … where people were actually requiring help,” he said, but bystanders and civilians passing by simply ignored the situation.

“I later learned that this was a cultural norm,” said the CEO of Crowdtect, a for-profit firm that develops human emergency response systems. People are taught from an early age “not to interfere” with others beyond their extended family “even if they are facing a very hard time or are in real danger.”

Inspired by the experience, Sunil co-founded CrowdGuard, a smartphone and Internet of Things (IoT) platform that mobilizes a community of trained students, volunteer organizations, commercial venues, and workplaces to respond quickly in emergency situations.

“Every user is a potential helper,” he told Techtonics, “but every user also potentially can reach out for help.”

Amit Ratnakar, Crowdtect UI Designer, works on the CrowdGuard app interface, in West Delhi, India. (CrowdGuard)

The mobile app connects all of these communities and offers users several easy ways to send out an emergency appeal when in danger. A user can press an emergency button, unplug the headphones, or press the power button three times to send an emergency alert.

The alert sends the location and identity of the person in danger to the community. Users in proximity can meet up with others as they navigate to the scene. And if potential helpers are farther away, the app increases the search radius to alert the nearest available members of the community to the danger.
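The expanding-radius dispatch described above, alerting the nearest users first and widening the search when no one is close enough, can be sketched roughly as follows (the coordinates, distance math, and radius schedule are illustrative assumptions, not CrowdGuard’s implementation):

```python
# Sketch of an expanding-radius alert dispatch: notify users within a
# small radius first, then double the radius until helpers are found.
import math

def nearby(alert_pos, users, radius_km):
    # Flat-earth distance approximation; adequate at city scale.
    lat0, lon0 = alert_pos
    km_per_deg = 111.0  # rough km per degree of latitude
    return [name for name, (lat, lon) in users.items()
            if math.hypot(lat - lat0, lon - lon0) * km_per_deg <= radius_km]

def dispatch(alert_pos, users, start_km=1.0, growth=2.0, max_km=16.0):
    # Widen the search radius until at least one helper is in range.
    r = start_km
    while r <= max_km:
        helpers = nearby(alert_pos, users, r)
        if helpers:
            return r, helpers
        r *= growth
    return None, []
```

Doubling the radius keeps the first alert local (so responders can actually reach the scene) while still guaranteeing that someone, somewhere in range, is eventually notified.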

Once on the scene, the crowd serves as a witness and a potential deterrent to the assault until police arrive. A chat function gives all users an all-clear alert when the situation is resolved. And a built-in mechanism keeps track of related police work.

The platform consists of several other layers, including education and compliance.

The education side is crucial to changing the bystander effect. Bystanders in a large crowd are unlikely to take action in an emergency, either because they believe others will act or because they would rather wait and take their cue from those around them. CrowdGuard helps users understand crowd dynamics and raises awareness about safe intervention, citizen rights, sexual and gender-based violence, and filing complaints with police.

CrowdGuard storytellers Leena Wadhwa (L) and Garima Bansal (C) engage college students in New Delhi about the CrowdGuard app and their personal role in providing community safety as active bystanders. (CrowdGuard)

On the compliance side, the platform ensures that local laws against sexual harassment are being observed.

“We support these communities in becoming compliant with the POSH Act – the Prevention of Sexual Harassment Act of 2013 – which obligates basically every organization in India to implement certain measures to prevent sexual harassment,” said Sunil. “And we did that in order to leverage the inter-personal trust and [assist] communities.”

But reaching out for help on smartphones has its limitations, he said. “We are sometimes facing the issue that smartphones which run CrowdGuard applications and which connect to the CrowdGuard platform have no power left after a long day … or we have the issue that there is no mobile phone network.”

While the app also uses SMS messaging as a backup for emergency alerts, Sunil believes using an IoT device to house CrowdGuard is a better alternative.

A wearable IoT device running on a small battery and very little energy can reach farther if the user is underground and can connect over longer distances, he said. Crowdtect’s device, still in development, would not be tethered to a smartphone.

“We have to put it in a shell,” he added. “Most probably it will become an amulet that you wear around your neck or attach it to your bag. And we’re having two trigger mechanisms.”

The technology is still being tested for reliability, but it is in the final stages of development. Crowdtect has just wrapped up an eight-week mentoring program in Washington with PeaceTech Accelerator, a partnership between PeaceTech Lab, C5 Accelerate, and Amazon Web Services dedicated to scaling startups around the world.

“They are preparing for launch, virtualizing the existing education parts,” he said. “We are in the pilot stage, and we are increasing the network of community partners. … So once we launch, we have a minimal density in the urban areas of Delhi, and we are kind of building up.”

Sunil hopes CrowdGuard will have a “social impact on the ground” and help build a network of safe spaces which would benefit everyone, even if they’re not connected through the application.


India Mobile Use on the Rise; China’s New Law Expands ‘Great Firewall’

Posted June 1st, 2017 at 12:35 pm (UTC-5)

Today’s Tech Sightings:

A Kashmiri shopkeeper browses the internet on his mobile phone as he waits for customers outside his shop in Srinagar, Indian-controlled Kashmir, April 26, 2017. (AP)

Internet Trends Report: India Is Definitely World’s Next Major Tech Market

Thanks to cheaper smartphones and data plans, India now has 355 million internet users. That’s about 27 percent of its population of 1.3 billion – up from 277 million in 2015. That means mobile internet use is also on the rise, accounting for up to 80 percent of all web traffic, which is higher than the global average of 50 percent. There are challenges, however, before India becomes that much-coveted market, including education and infrastructure, lack of purchasing power, stringent regulations and expensive registration for startups, to name a few.

EU: Social Networks Are Getting Better at Reviewing Hate Speech

European Commission officials say social media giants have responded to calls to act on hate speech. According to the officials, Facebook did a better job of tackling hate speech complaints in the last six months than Twitter, YouTube, and Microsoft, responding to them within the EU’s specified 24-hour window. The companies registered a 40 percent improvement over the past year in terms of reviewing and removing hate speech.

China Cybersecurity Law Will Keep Citizens’ Data Within the Great Firewall

China’s new cybersecurity law went into effect Thursday. It mandates that all personal information and data belonging to Chinese citizens and companies be stored on servers inside the country. The law imposes severe restrictions on the transmission of scientific or technological data overseas and applies to a wide range of social media and internet firms, including foreign entities. Companies that fail to seek permission before exporting data or that violate the new rules risk being blacklisted or having their licenses revoked.
