

Social Media Feeds Differ from Person to Person, Driving Political Polarization

2020-10-09 | Robert Elliott Smith
英語世界 (The World of English), 2020, Issue 9
Keywords: artificial intelligence

The election season is winding up, and my social media is once again awash with political stories. Headlines stream: “Warren and Bernie's awkward truce...”, “Trump sees his base growing...” and “The Fed's real message...”. This is the America I see today.

The trouble is, it's not the America you see, or that anyone else sees. It is my personally curated version of reality: a constantly shifting mirage, evolving in real time, depending on my likes and dislikes, what I click on, and what I share.

A recent Pew Research Center study found black social media users are more likely to see race-related news. The Mueller report suggests Russian efforts against Hillary Clinton targeted Bernie Sanders supporters. In October 2016, Brad Parscale, digital director of Trump's 2016 campaign, told Bloomberg News that he targeted Facebook and media posts at possible Clinton supporters so that they would sit the election out.

Parscale, who as of early August had spent more on Facebook ads for Trump 2020 ($9.2 million) than the four top Democratic candidates combined, said that in 2016 he typically ran 50,000 ad variations each day, micro-targeting different segments of the electorate.

        Algorithms are prejudiced

While political operatives' exploitation of yellow journalism is nothing new, the coupling of their manipulative techniques to a technologically driven world is a substantial change. Algorithms are now the most powerful curators of information, and their actions enable such manipulation by creating our fractured informational multiverse.

        And those algorithms are prejudiced. That may sound extreme, but let me explain.

In analyses conducted by my colleagues and me at University College London (UCL), we modeled the behavior of social networks, using binary signals (1s and 0s) passed between simplified “agents” that represented people sharing opinions about a divisive issue (say, pro-life versus pro-choice, or the merits of building a wall or not).

Most “agents” in this model determine the signals they broadcast based on the signals they receive from those around them (as we do when sharing news and stories online). But we added a small number of agents we called “motivated reasoners,” who, regardless of what they hear, broadcast only their own predetermined opinion.

        Our results showed that in every case, motivated reasoners came to dominate the conversation, driving all other agents to fixed opinions, thus polarizing the network. This suggests that “echo chambers” are an inevitable consequence of social networks that include motivated reasoners.
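To make the mechanism concrete, here is a minimal sketch of that kind of model. The ring-shaped network, the majority-vote update, and every parameter are illustrative assumptions of mine, not the exact configuration of our UCL study:

```python
import random

random.seed(1)

# Toy version of the model described above; the ring topology and all
# parameters are illustrative assumptions, not the UCL study's setup.
N, K, STEPS = 100, 4, 300            # agents, neighbors per side, rounds
opinions = [random.randint(0, 1) for _ in range(N)]
fixed = {0: 0, N // 2: 1}            # two motivated reasoners, opposite views

for _ in range(STEPS):
    # Motivated reasoners broadcast their fixed opinion no matter what.
    signals = [fixed.get(i, opinions[i]) for i in range(N)]
    new = []
    for i in range(N):
        heard = [signals[(i + d) % N] for d in range(-K, K + 1) if d != 0]
        votes = sum(heard)
        if votes * 2 > len(heard):        # majority broadcast 1
            new.append(1)
        elif votes * 2 < len(heard):      # majority broadcast 0
            new.append(0)
        else:                             # tie: keep current opinion
            new.append(opinions[i])
    opinions = new

# Print the ring: long runs of identical digits are the echo chambers.
print("".join(str(fixed.get(i, opinions[i])) for i in range(N)))
```

Long runs of identical digits in the output are the toy's echo chambers, each anchored on one of the planted motivated reasoners.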

It goes deeper than you think: Two years after Charlottesville1, I'm fighting the conspiracy theory industrial complex.

        So who are these motivated reasoners? You might assume they are political campaigners, lobbyists or even just your most dogmatic Facebook friend. But, in reality, the most motivated reasoners online are the algorithms that curate our online news.

        How technology generalizes

In the online media economy, the artificial intelligence behind the algorithms is single-minded in achieving its profit-driven agenda: ensuring the maximum frequency of human interaction by getting the user to click on an advertisement. But AIs are not only economically single-minded; they are also statistically simple-minded.
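As a hedged illustration of that single-mindedness (a generic epsilon-greedy bandit with invented click rates, not any platform's actual system), consider a curator that picks which story to show purely by the click rate it has measured for each crude user segment:

```python
import random

random.seed(2)

# Invented "true" click rates per (segment, story) pair, for illustration.
TRUE_RATE = {("A", "outrage"): 0.30, ("A", "sports"): 0.05, ("A", "policy"): 0.08,
             ("B", "outrage"): 0.06, ("B", "sports"): 0.25, ("B", "policy"): 0.07}
SEGMENTS, STORIES = ("A", "B"), ("outrage", "sports", "policy")
EPSILON = 0.1                                  # small exploration rate

shows = {k: 1 for k in TRUE_RATE}              # start at 1 to avoid div by zero
clicks = {k: 0 for k in TRUE_RATE}

for _ in range(20000):
    seg = random.choice(SEGMENTS)
    if random.random() < EPSILON:              # explore occasionally
        story = random.choice(STORIES)
    else:                                      # otherwise show the top earner
        story = max(STORIES, key=lambda s: clicks[seg, s] / shows[seg, s])
    shows[seg, story] += 1
    clicks[seg, story] += random.random() < TRUE_RATE[seg, story]

for seg in SEGMENTS:
    feed = max(STORIES, key=lambda s: clicks[seg, s] / shows[seg, s])
    print(f"segment {seg} is now fed mostly: {feed}")
```

Nothing in the loop knows or cares what the stories say; each segment simply ends up being fed whatever it clicked on before.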

        Take, for example, the 2016 story in The Guardian about Google searches for “unprofessional hair” returning images predominantly of black women.

        Does this reveal a deep social bias towards racism and sexism? To conclude this, one would have to believe that people are using the term “unprofessional hair” in close correlation with images of black women to such an extent as to suggest most people feel their hairstyles define “unprofessional.” Regardless of societal bias (which certainly exists), this seems doubtful.

It isn't all bad news for newspapers: I'm a journalism student in an era of closing newsrooms and ‘fake news.' But I still want in.

Having worked in AI for 30 years, I know it is probably more statistically reliable for algorithms to recognize black women's hairstyles than those of black men, white women, etc. This is simply an aspect of how algorithms “see”: by using overall features of color, shape, and size. Just as with real-world racism, resorting to simple features is easier for algorithms than deriving any real understanding of people. AIs codify this effect.

        To be prejudiced means to pre-judge on simplified features, and then draw generalizations from those assumptions. This process is precisely what algorithms do technically. It is how they parse the incomprehensible “Big Data” from our online interactions into something digestible. AI engineers like me explicitly program generalization as a goal of the algorithms we design.
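Here is a toy instance of that process; the feature, the data, and the labels are all fabricated for illustration. A one-rule learner turns a handful of tagged examples into a blanket pre-judgment over a single crude feature:

```python
# All data here is fabricated for illustration: pairs of
# (hair_texture_score, human_tag) standing in for crudely
# featurized images with noisy crowd labels.
DATA = [(0.9, "unprofessional"), (0.85, "unprofessional"),
        (0.8, "unprofessional"), (0.7, "professional"),
        (0.3, "professional"), (0.2, "professional")]

def one_rule(data):
    """Pick the single threshold on the single feature that best
    separates the training labels: generalization at its crudest."""
    best_t, best_acc = None, -1.0
    for t, _ in data:
        acc = sum((x >= t) == (tag == "unprofessional")
                  for x, tag in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = one_rule(DATA)
# Six noisy examples now pre-judge every future image:
print(f"rule learned: 'unprofessional' whenever score >= {t}")
```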

        Given the simplifying features that algorithms use (gender, race, political persuasion, religion, age, etc.) and the statistical generalizations they draw, the real-life consequence is informational segregation, not unlike previous racial and social segregation.

        Dangerous, divisive consequences

        Groups striving for economic and political power will inevitably exploit these divisions, using techniques such as targeted marketing and digital gerrymandering to categorize groups. The consequence is not merely the outcome in an election, but the propagation of deep divisions in the real world we inhabit.

Recently, Sen. Kamala Harris spoke about how federally mandated desegregation busing transformed her life opportunities. Like her, I benefited from that conscious effort to mix segregated communities: as a child in 1970s Birmingham, Alabama, I attended an all-white elementary school to which black children were bused. Those first real interactions I had with children of a different race radically altered my perspective of the world.

        It never gets easier: How many more birthdays will our journalist son, Austin Tice2, spend captive in Syria?

The busing of the past ought now to inspire efforts to overcome the digital segregation we see today. Our studies at UCL indicate that the key to counteracting the natural tendency of algorithmically mediated social networks to segregate is to technically promote the mixing of ideas, through greater informational connectivity between people.
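As a rough sketch of what that remedy looks like in the toy model from earlier (again with illustrative parameters of my own choosing, not our study's setup), give every agent a few long-range listening links on top of its ring neighbors:

```python
import random

random.seed(3)

# Same toy dynamics as before, plus LONG_LINKS randomly chosen distant
# agents that each agent also listens to: a crude stand-in for greater
# informational connectivity. Parameters are illustrative assumptions.
N, K, LONG_LINKS, STEPS = 100, 4, 4, 300
opinions = [random.randint(0, 1) for _ in range(N)]
fixed = {0: 0, N // 2: 1}            # two motivated reasoners, opposite views
far = [random.sample(range(N), LONG_LINKS) for _ in range(N)]

for _ in range(STEPS):
    signals = [fixed.get(i, opinions[i]) for i in range(N)]
    new = []
    for i in range(N):
        heard = [signals[(i + d) % N] for d in range(-K, K + 1) if d != 0]
        heard += [signals[j] for j in far[i]]      # distant voices
        votes = sum(heard)
        if votes * 2 > len(heard):
            new.append(1)
        elif votes * 2 < len(heard):
            new.append(0)
        else:
            new.append(opinions[i])
    opinions = new

# With the extra mixing, the rigid two-bloc split of the earlier run
# is much harder to sustain in this toy.
print("".join(str(fixed.get(i, opinions[i])) for i in range(N)))
```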

Practically, this may mean the regulation of online media, and an imperative for AI engineers to design algorithms around new principles that balance optimization with the promotion of diverse ideas. This scientific shift in perspective will ensure a healthier mix of information, particularly around polarizing issues, just as those buses enabled racial and social mixing in my youth.

