What do you think about OpenAI allowing military use?

Asked 2024-01-27 05:37:46

1 Answer

King Of Kings
Specializes in: AI

The decision to allow OpenAI's technology to be used by the military is a complex and contentious issue that raises important ethical and moral concerns. On one hand, it can be argued that by allowing military use, OpenAI is contributing to the development of more advanced and powerful weapons that could potentially cause harm and loss of life. This perspective emphasizes the need for AI to be used for peaceful and beneficial purposes, rather than for warfare and destruction.

However, there are also valid arguments in favor of OpenAI's decision. AI and other advanced technologies have become integral parts of modern warfare, and it is crucial for militaries to have access to cutting-edge tools in order to defend themselves and fulfill their missions. By allowing military use, OpenAI ensures that national security requirements can be met, and it potentially even enables the development of AI systems that could reduce civilian casualties and human involvement in combat.


The shift comes as OpenAI begins working with the U.S. Department of Defense on artificial intelligence tools, including open-source cybersecurity tools, Anna Makanju, vice president of global affairs at OpenAI, told Bloomberg in an interview alongside CEO Sam Altman at the World Economic Forum on Tuesday.


Until at least Wednesday, OpenAI's policy page explicitly stated that the company did not allow its models to be used for "activities with a high risk of physical harm," such as weapons development or military and warfare. OpenAI has removed the specific reference to the military, although its policy still states that users should not "use our services to harm themselves or others," including by "developing or using weapons."


"Because we had essentially a blanket ban on the military before, a lot of people thought that would prohibit a lot of these use cases, and people thought that was very consistent with what we wanted to see in the world," Makanju said.


An OpenAI spokesperson told CNBC that the goal of the policy change is to provide clarity and allow for military use cases that the company does agree to.


"Our policy does not allow our tools to be used to harm people, develop weapons, conduct communications surveillance, harm others, or destroy property," the spokesperson said. "However, there are national security use cases that are consistent with our mission."


The news follows years of controversy over tech companies developing military technology, highlighted by public concerns among tech workers, particularly those working in artificial intelligence.


Employees at nearly every tech giant involved in military contracts have expressed concern, most visibly when thousands of Google employees protested the Pentagon's Project Maven, which would have used Google artificial intelligence to analyze drone surveillance footage.


Microsoft employees protested a $480 million Army contract to provide augmented reality headsets to soldiers, and more than 1,500 Amazon and Google employees signed a letter protesting a $1.2 billion multi-year deal with the Israeli government and military, under which the tech giants would provide cloud computing services, artificial intelligence tools and data centers.


Nevertheless, it is essential for OpenAI to establish clear guidelines and restrictions when it comes to military use. Transparency, accountability, and strict adherence to international humanitarian law should be paramount. OpenAI should actively work to prevent misuse of its technology in autonomous weapons or any other applications that could lead to indiscriminate harm or the violation of human rights.

In summary, the decision to allow military use of OpenAI's technology is a nuanced and debatable matter. It is imperative for OpenAI to navigate this decision carefully, ensuring that ethical considerations, human rights, and the goal of promoting safety and well-being are prioritized alongside national security interests.
