Product Promotion Network



Zoic Falcon shorts and Impact liner review

Paul Andrews Wed Jul 11th, 2018 9:10 AM

Zoic’s Falcon shorts offer lightweight, stretchy comfort in a rugged, durable, and stylish package.

Pairing them with the Impact liner shorts, riders get the added benefit of hip, thigh, and tailbone protection.


Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems

Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising to not develop “lethal autonomous weapons.”

It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”

The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity.

The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.

Signatories include SpaceX and Tesla CEO Elon Musk; the three co-founders of Google’s DeepMind subsidiary, Shane Legg, Mustafa Suleyman, and Demis Hassabis; Skype founder Jaan Tallinn; and some of the world’s most respected and prominent AI researchers, including Stuart Russell, Yoshua Bengio, and Jurgen Schmidhuber.

Max Tegmark, a signatory of the pledge and professor of physics at MIT, said in a statement that the pledge showed AI leaders “shifting from talk to action.” Tegmark said the pledge did what politicians have not: impose hard limits on the development of AI for military use. “Weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way,” said Tegmark.

So far, attempts to muster support for the international regulation of autonomous weapons have been ineffectual. Campaigners have suggested that LAWS should be subject to restrictions, similar to those placed on chemical weapons and landmines.

But critics note that it’s incredibly difficult to draw a line between what does and does not constitute an autonomous system. For example, a gun turret could target individuals but not fire on them, with a human “in the loop” simply rubber-stamping its decisions.

They also point out that enforcing such laws would be a huge challenge, as the technology to develop AI weaponry is already widespread. Additionally, the countries most involved in developing this technology (like the US and China) have no real incentive not to do so.

Paul Scharre, a military analyst who wrote a book on the future of warfare and AI, told The Verge this year that there isn’t enough “momentum” to push forward international restrictions. “There isn’t a core group of Western democratic states involved, and that’s been critical [with past weapons bans], with countries like Canada and Norway, leading the charge,” said Scharre.

However, while international regulations might not be coming anytime soon, recent events have shown that collective activism like today’s pledge can make a difference.

Google, for example, was rocked by employee protests after it was revealed that the company was helping develop non-lethal AI drone tools for the Pentagon. Weeks later, it published new research guidelines, promising not to develop AI weapon systems. A threatened boycott of South Korea’s KAIST university had similar results, with KAIST’s president promising not to develop military AI “counter to human dignity including autonomous weapons lacking meaningful human control.”

In both cases, it’s reasonable to point out that the organizations involved are not stopping themselves from developing military AI tools with other, non-lethal uses.

But a promise not to put a computer solely in charge of killing is better than no promise at all.

The full text of the pledge, which was organized by the Future of Life Institute along with a full list of signatories, can be read below:

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.

Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

Netflix Misses Subscriber Growth Target Amid Growing Competition

Is Netflix’s growth slowing down? During the second quarter, the video streaming service fell short of its forecast of 6.2 million new subscribers, missing the projection by about a million and sending its stock price tumbling.

“We had a strong but not stellar Q2,” the company said in a letter to shareholders on Monday. In total, Netflix now has 130 million subscribers, which is certainly a lot. But during the second quarter, the company added a mere 670,000 new users in the US, down from a projected 1.2 million.

In a conference call, Netflix executives played down the missed projections and said the shortfall wasn’t due to any recent price increases to the streaming service. The company’s chief financial officer, David Wells, explained that Netflix may have simply “over-forecasted” the numbers after four straight quarters of “under-forecasting” them and beating the estimates. “Our total addressable market is intact and hasn’t really changed,” Wells added. “We’re still on track for a strong growth year.”

However, Paul Verna, an analyst at eMarketer, said competition in the streaming space may be to blame. “This isn’t entirely surprising given rising competition in the video streaming market, where Amazon, Hulu, HBO and others are gaining share of subscription video dollars at Netflix’s expense,” he said in a statement. To grow the business, Netflix has been expanding outside the US, where a majority of the company’s paid subscriptions now come from.

As a result, the streaming service said it’s going to invest in more non-English original content, especially for big markets including India.

In total, Netflix has budgeted about $8 billion this year to spend on developing content. The goal is to eventually produce enough feature films and TV shows that the company can phase out its licensing deals with big studios such as Disney and WarnerMedia, which are developing their own streaming services.

Despite the weak quarter, Netflix is set to remain a clear leader in the video streaming industry, according to eMarketer. In the US, Netflix has about 147 million viewers, far more than Amazon, Hulu or HBO’s streaming service, the research company said. “Netflix has a strong slate of original content that should keep it in the forefront among streaming services, and it plans to continue outspending the competition to develop TV programming and feature films,” Verna added.

Also on the positive side: Netflix reported net income for the quarter of $384 million, up from a mere $66 million a year ago.

