
Passwords Are Terrible (Surprising No One)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/passwords-are-terrible-surprising-no-one.html

This is the result of a security audit:

More than a fifth of the passwords protecting network accounts at the US Department of the Interior—including Password1234, Password1234!, and ChangeItN0w!—were weak enough to be cracked using standard methods, a recently published security audit of the agency found.

[…]

The results weren’t encouraging. In all, the auditors cracked 18,174—or 21 percent—of the 85,944 cryptographic hashes they tested; 288 of the affected accounts had elevated privileges, and 362 of them belonged to senior government employees. In the first 90 minutes of testing, auditors cracked the hashes for 16 percent of the department’s user accounts.

The audit uncovered another security weakness—the failure to consistently implement multi-factor authentication (MFA). The failure extended to 25—or 89 percent—of 28 high-value assets (HVAs), which, when breached, have the potential to severely impact agency operations.

Original story:

To make their point, the watchdog spent less than $15,000 on building a password-cracking rig—a setup of a high-performance computer, or several chained together, with the computing power designed to take on complex mathematical tasks like recovering hashed passwords. Within the first 90 minutes, the watchdog was able to recover nearly 14,000 employee passwords, or about 16% of all department accounts, including passwords like ‘Polar_bear65’ and ‘Nationalparks2014!’.
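The mechanics of such an audit are simple to sketch: hash candidate passwords (wordlist entries plus common mutations such as case changes, leetspeak substitutions, and appended digits) and compare each digest against the stolen hash database. The sketch below uses SHA-256 and a toy wordlist purely for illustration; the account names and passwords are hypothetical, and a real audit would use a tool like hashcat against the actual hash format with far larger rule sets:

```python
import hashlib

def h(pw: str) -> str:
    # Stand-in hash function; real audits target whatever format the
    # stolen hashes are actually in.
    return hashlib.sha256(pw.encode()).hexdigest()

# Hypothetical account -> hash database recovered by the auditors
stored = {
    "alice": h("Password1234"),
    "bob":   h("ChangeItN0w!"),
    "carol": h("kV9#xq2$Lp7&"),  # genuinely random: survives the attack
}

# Tiny wordlist plus the kind of mutation rules crackers apply at scale
base_words = ["Password", "ChangeItNow", "Welcome"]

def mutations(word):
    for w in {word, word.lower(), word.replace("o", "0")}:
        for suffix in ("", "!", "1234", "1234!"):
            yield w + suffix

def crack(stored):
    cracked = {}
    for word in base_words:
        for candidate in mutations(word):
            digest = h(candidate)
            for account, hashed in stored.items():
                if hashed == digest:
                    cracked[account] = candidate
    return cracked

print(crack(stored))  # carol's random password is never found
```

Patterned passwords fall to a handful of mutation rules; only the random one forces a full brute-force search, which is what makes it expensive to crack.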

What War Is

Post Syndicated from original http://www.gatchev.info/blog/?p=2529

Misha is a soldier in the Ukrainian army. His real name is different. I have also changed every identifying detail I noticed. When and if I judge it is no longer dangerous for him, I will give the real ones, and say how I know him.

He has been in the army for about six months. He is fighting at Bakhmut. Sometimes he sends emails, three or four lines long, probably in violation of some ban. For the past month or so they have grown more frequent: instead of one a month, about two a week, and considerably longer. He needs to share.

And what he shares. Many things: reflections on Russia and Ukraine, how his views have shifted (he is an ethnic Russian). Who fights and how, what happened when. From the broadly human to the most ordinary, personal things. But the theme that has swelled lately is the place of a person, and specifically of Misha, in the war.

His account is harrowing. It struck me harder than a horrifying Russian film, "Come and See". Perhaps because it is not a film but reality. I find it hard to believe it was written by a man not yet thirty. And I don't want to believe it, so as not to think about what he has suffered.

For a week or two I have been considering translating his letters and posting them here. Not all of them, only the theme of man and war. What held me back was that they are disjointed, probably written in fragments. In the end I decided: I will extract the theme and arrange it into a single narrative, keeping what he said unchanged. Without the details it will lack the punch of the original letters; no matter, it is still something. And without the most drastic things, which are too heavy... I think anyone who wants the right to call themselves human should read it.

And feel it.


I'm a conscript. When they sent me to Bakhmut, I was scared. I knew how hard it was there. Then I stopped caring. You can't die twice, and you can't live forever either.

Before November it was hard too. The orcs don't economize on artillery; in many places not even ruins are left standing. However careful we are, there are wounded and dead. You have breakfast together, and by evening the man is gone. Forever. It makes you want to howl. If they kill you, it won't hurt afterwards. But for the others it hurts, and it doesn't stop. And you have known him only months; what is it like for his children, his wife, his mother?

It turns out that was nothing. Until then I had hardly seen a real, live enemy. Once every two or three days you spot something move over by their positions, fire a burst in that direction, and that's it. Just so it isn't nothing at all. I doubt I even scratched anyone. You can't realistically hit anything. And thank God. Orcs have children, wives, mothers too. Even the Wagnerites, and they are the ones facing us. The slogans "die for Mother Russia" and "death to the Ukro-Nazis" are an orc's hide, and locked inside it are people like me and my comrades. Peel it off, and what's inside will remember it is human. Their handlers may pass for Nazgûl, but they are a long way from Morgoth. For now.

I learned what horror is when the conscripts arrived on the other side. They drive them out onto the front line, a few dozen at a time, and order them to attack. If anyone hesitates or tries to turn back, the Wagnerites behind them shoot him.

Many of the conscripts are so panicked they don't even shoot. Not that they would hit us; our shelters and trenches are excellent. Almost never is a BTR or a tank sent ahead to give them at least some cover. Their body armor is tin: even a pistol punches through it at 200 meters, we have tried it on captured sets. The very first bullet drops them...

Usually it is a young boy coming at you. Running toward you, often having even forgotten to shoot. Through the scope you can see his eyes. And the horror in them. His own, and even more that of his loved ones, of everyone to whom he is dear. For him the pain will stop; for them it will never stop. And he always looks like one of your comrades, or someone else you know. Or like the one in the mirror. The moment you see him, you understand: you too will remember him forever. He will come at night to ask you why you took away his children's father and his mother's child. Or his children or his mother will come to ask you themselves. Your dead comrades were killed by some orc who came here to kill, to take revenge for the fact that he lives like an orc, without beauty. But this boy will be killed by you, the human.

And then you pull the trigger.

And he lurches and crumples. Through the scope you watch his fingers claw at the earth one last time.

And you feel a scream inside you. For the tens of thousands of days that vanished in an instant, the days that were waiting for him, none of them perfect, every one of them happy. For his children who will now never be born. For those already born, left without a father forever. For his orphaned wife and parents, brothers and sisters, friends. For his dog, which will never again be petted by its master. All that happiness you erased with the twitch of a finger. And all that good; for there is good even in the most evil man, perhaps only a little, but for a single life it is enormous.

And you want to throw down the Kalashnikov and fall to your knees. And for there to be someone able to forgive you. For your own choice, for which you deserve forgiveness from no one, and never will. Before the war I would not have understood this; I would have thought anyone who thought this way was mad. Now I think those who don't think this way are mad.

But you can't. More of them are running at you. And in the scope you see the face of the next one, and his eyes, and in them the horror of everyone to whom he is dear.

And then you pull the trigger again, and the scream inside convulses you again.

And then again. And again. And again.

Until no-man's-land is cleared. The few left unhit lie among the dead, playing dead. When night falls they will crawl back to their cover. Or to ours, if they mistake the direction. Or perhaps if they don't mistake it. We get one about every other day. They are all in a stupor. They don't seem to grasp that they are with the "enemy". By now we know: when night falls, we will likely see someone in the thermal scope crawling toward us. Neither shooting at us nor taking cover. When he reaches our trench he simply collapses into it and lies there; he doesn't care that "enemies" are coming up to him, he does nothing. We take his weapon, if he hasn't already lost it, and someone drives him to the rear. He causes no trouble.

It is good that they exist, and that we get to spare them. They give a spark of hope for salvation amid the nightmare. Without them I would have gone mad.

Of the twelve men in our squad in August, nine are left. Two shoot at the orcs without it troubling them. They are simply defending their country, nothing personal. As if going to work. They are probably right. When the orcs take a place, life there is Russian roulette. The population may have no particular problems, or it may be like a concentration camp.

And one practically climaxes when he hits an orc. He says his mother and sister died in Mariupol. He will stop killing orcs, he says, when the two of them rise from the dead. Maybe it is true, and that is what broke him. Maybe he is lying, a psychopath with an excuse. I didn't know him before; I don't know.

The rest of us are on the edge of madness. We have talked about it many times; I know it. Not from fear for ourselves. From the horror of how many people we have killed, of what we have become. There are plenty of justifications, all kinds; they are like shit, anyone can produce them, they always stink, and they never hide anything.

When the first wave falls, the next one is sent from over there. More boys to their deaths. And the killing starts again, piling more screams into your soul. Screams you never get used to. And never should. If you have killed, guilty or innocent no longer matters; carrying your punishment is the last link to the human left to you.

We shoot. And pray there will be no third wave. We don't know whether we can keep ourselves from throwing down our weapons and stepping out of our shelters to be killed, so that it will finally stop hurting.

Sometimes there is a third wave. We grit our teeth and shoot. By now we know: there has never been a fourth. If the Wagnerites didn't move after the second wave, they will move after this one. Hoping we have nearly run out of ammunition. Or perhaps that we could not bear the killing any longer and will step out to be shot. On other sectors, I hear, it even comes to hand-to-hand fighting. Not on ours yet, though a few times they got within about thirty meters of us. Once they sense we are not sparing ammunition, they give up and withdraw. Nobody shoots at them from behind. For now.

They are the other thing that keeps us going. We have appointed them our evil. With reason. In every wave of conscripts at least two or three who panic and stop, or fall back, are killed by them. Do you know how clearly you can tell whether a bullet strikes from the front or from behind? Hit from the front, the bullet throws the body backward and the man folds forward, around the wound. Hit from behind, the bullet shoves the body forward; he arches, and his arms and head fly back. Even from two hundred meters you can see it perfectly.

The Wagnerites' orc hide is thicker. Start peeling it and you need more force, and you will tear off pieces of flesh with it. They are closer to a well-ringed Nazgûl. But inside there will still be enough of a human, with children, parents, dreams. Sometimes with a taste for music: we mock them for what they have done to the classics, but even the pretension of liking them is still a small door to the human. And inside us, it doesn't matter what they really are; what matters is what we have appointed them to be. The first says things about them. The second says things about us. The things that make us humans. Or orcs.

That is why we count them as evil. So that we can think ourselves good. Even though we are mass murderers; inside each of us echo screams by the dozens. I don't know if it is convicts facing us; no matter, convicts are people too. But the thought that we are fighting something more evil than ourselves keeps us going. I don't know whether, without it, we wouldn't already have put a bullet in ourselves, so that the screams would finally stop.

When the Wagnerites withdraw, the attack stops. A second one the same day has almost never happened, at least on our sector. Near our position there is an aid station; as it grows dark, the medics crawl out onto the battlefield, looking for anyone still alive, to bandage them and drag them out. Never mind that none of ours are out there. The head of the station says some of the rescued were grateful: for them it was a way to surrender without their families suffering for it, and they shared operational information, so the command didn't object to bringing them in. I don't know if that is true, but lives are saved. Sometimes some of us go out with them. I have gone too; twice I carried out a wounded man. You don't care whether they will shoot you from the other side. Or whether the man you save will say anything important. Or whether he will thank you or curse you. The human in you survives by it.

I have heard that on other sectors the orcs have fired at our medics. I don't know if it is true. On our sector it has not happened. Surely they see them; they cannot lack thermal sights. We, too, often see their medics crawling at night. It has never even crossed our minds to shoot at them, not even for the man with the mother and sister in Mariupol. The medics say they have sometimes met the other side's medics in the middle of the field. That there is no hostility. That they share, if one side lacks something the other has to spare. That the orcs prefer to pull their wounded back to their own side, but if it is easier or more urgent to pull them to ours, they often leave them to us. We are short of weapons and ammunition, but medical supplies we seem to have more of, and better.

Among them, too, there must be all kinds. And probably most of them also fear the killing more than death. People like me. If one tenth of what those who fought in Mariupol and Severodonetsk tell is true, I ought to hate them to death. To rejoice when I kill them. And probably all of it is true; Bakhmut is right before my eyes. If I told you what they have seen, you would not sleep at night for months. But I force myself not to hate the orcs. To think of them as people. Even the Wagnerites: evil, but people. If I give way, if I yield, I will become like my comrade with the dead mother and sister. A deranged killing machine and nothing more. Incapable of being human, ever again.

Maybe I have already become one, and that is why I have not yet thrown myself onto the bullets. I have no right to: if we do that, the orcs will reach my city too, and it will also be turned into a Bakhmut, heaps of ruins with the arms and legs of corpses sticking out from under them. If they shoot even their own... And day after day we kill more and more lads like us, pile up a mountain of them, blacken the lives of I don't want to think how many of the living. In truth we are killing ourselves. So that we don't have to kill those we love. War: you die to save them. From dying, and even more from killing. It sounds so easy.

But it isn't. Tomorrow there will be another attack; a day rarely passes without one. And in the scope you will again see the faces of those dear to you, and your own; you will pull the trigger and they will crumple, dead. In reality others, dear to someone else; does it matter? Surely among them, too, there are plenty horrified by the death they have sown. That is their burden. I have mine, and it is crushing me.

At times like these people say: may we meet and embrace when the war is over. I don't want to meet and embrace you. I don't want to meet and embrace anyone who has not killed. I would defile him. If I am still alive when the war ends, when I no longer need to defend those close to me, I want to be shot. So that the screams inside will stop. The orcs lie that we are one people, and it is true. Everything that lives and feels is one people, the boys I shoot included. At the aid station they keep a dog whose hind leg was torn off in shelling; they barely saved it. It too. And the mice in our shelter, the ones you would call vermin and throw poison to. They too. We always leave crumbs for them, and as long as we are here we will. We are feeding the last sparks of the human in us.

War is not terrifying because you may be killed. It is terrifying because you kill. And because you cannot stop, or those who are dear to you will be killed. You sacrifice yourself for their sake, something worse than death. Otherwise they would have to make that sacrifice.

Ransomware Payments Are Down

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/ransomware-payments-are-down.html

Chainalysis reports that worldwide ransomware payments were down in 2022.

Ransomware attackers extorted at least $456.8 million from victims in 2022, down from $765.6 million the year before.

As always, we have to caveat these findings by noting that the true totals are much higher, as there are cryptocurrency addresses controlled by ransomware attackers that have yet to be identified on the blockchain and incorporated into our data. When we published last year’s version of this report, for example, we had only identified $602 million in ransomware payments in 2021. Still, the trend is clear: Ransomware payments are significantly down.
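Putting numbers on both the headline drop and the usual undercount makes the trend concrete. The figures below are taken directly from the quoted report:

```python
p2021, p2022 = 765.6, 456.8  # reported ransomware payments, $ millions

# Year-over-year change in identified payments
decline = (p2021 - p2022) / p2021
print(f"year-over-year decline: {decline:.1%}")

# Chainalysis initially identified only $602M for 2021; the figure was
# later revised upward as more attacker addresses were attributed.
initial_2021 = 602.0
revision = (p2021 - initial_2021) / initial_2021
print(f"2021 estimate revised upward by {revision:.1%}")
```

A roughly 40% drop, against a 2021 figure that itself grew about 27% after publication, is why the trend reads as real even though the absolute totals are certainly undercounted.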

However, that doesn’t mean attacks are down, or at least not as much as the drastic drop-off in payments would suggest. Instead, we believe that much of the decline is due to victim organizations increasingly refusing to pay ransomware attackers.

NIST Is Updating Its Cybersecurity Framework

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/nist-is-updating-its-cybersecurity-framework.html

NIST is planning a significant update of its Cybersecurity Framework. At this point, it’s asking for feedback and comments to its concept paper.

  1. Do the proposed changes reflect the current cybersecurity landscape (standards, risks, and technologies)?
  2. Are the proposed changes sufficient and appropriate? Are there other elements that should be considered under each area?
  3. Do the proposed changes support different use cases in various sectors, types, and sizes of organizations (and with varied capabilities, resources, and technologies)?
  4. Are there additional changes not covered here that should be considered?
  5. For those using CSF 1.1, would the proposed changes affect continued adoption of the Framework, and how so?
  6. For those not using the Framework, would the proposed changes affect the potential use of the Framework?

The NIST Cybersecurity Framework has turned out to be an excellent resource. If you use it at all, please help with version 2.0.

Kevin Mitnick Hacked California Law in 1983

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/kevin-mitnick-hacked-california-law-in-1983.html

Early in his career, Kevin Mitnick successfully hacked California law. He told me the story when he heard about my new book; he partially recounts it in his 2012 book, Ghost in the Wires.

The setup is that he has just discovered that there’s a warrant for his arrest by the California Youth Authority, and he’s trying to figure out if there’s any way out of it.

As soon as I was settled, I looked in the Yellow Pages for the nearest law school, and spent the next few days and evenings there poring over the Welfare and Institutions Code, but without much hope.

Still, hey, “Where there’s a will…” I found a provision that said that for a nonviolent crime, the jurisdiction of the Juvenile Court expired either when the defendant turned twenty-one or two years after the commitment date, whichever occurred later. For me, that would mean two years from February 1983, when I had been sentenced to three years and eight months.

Scratch, scratch. A little arithmetic told me that this would occur in about four months. I thought, What if I just disappear until their jurisdiction ends?
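The statute’s “whichever occurred later” rule, and Mitnick’s “little arithmetic,” can be modeled in a few lines. The sketch below uses his August 1963 birthdate and the February 1983 commitment date from the excerpt; the exact days are assumptions for illustration:

```python
from datetime import date

def jurisdiction_expires(turns_21: date, commitment: date) -> date:
    # Juvenile Court jurisdiction ends at the defendant's 21st birthday
    # or two years after the commitment date, whichever occurs later.
    return max(turns_21, commitment.replace(year=commitment.year + 2))

turns_21 = date(1984, 8, 6)    # born August 1963; exact day assumed
commitment = date(1983, 2, 1)  # sentenced February 1983; exact day assumed

expiry = jurisdiction_expires(turns_21, commitment)
print(expiry)                             # the two-year clause governs
print((expiry - date(1984, 10, 1)).days)  # about four months out
```

Since he had already turned 21, the two-year clause was the later date: jurisdiction ran out in February 1985, roughly four months from when he was reading the statute.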

This was the Southwestern Law School in Los Angeles. This was a lot of manual research—no search engines in those days. He researched the relevant statutes, and case law that interpreted those statutes. He made copies of everything to hand to his attorney.

I called my attorney to try out the idea on him. His response sounded testy: “You’re absolutely wrong. It’s a fundamental principle of law that if a defendant disappears when there’s a warrant out for him, the time limit is tolled until he’s found, even if it’s years later.”

And he added, “You have to stop playing lawyer. I’m the lawyer. Let me do my job.”

I pleaded with him to look into it, which annoyed him, but he finally agreed. When I called back two days later, he had talked to my Parole Officer, Melvin Boyer, the compassionate guy who had gotten me transferred out of the dangerous jungle at LA County Jail. Boyer had told him, “Kevin is right. If he disappears until February 1985, there’ll be nothing we can do. At that point the warrant will expire, and he’ll be off the hook.”

So he moved to Northern California and lived under an assumed name for four months.

What’s interesting to me is how he approaches legal code in the same way a hacker approaches computer code: poring over the details, looking for a bug—a mistake—leading to an exploitable vulnerability. And this was in the days before you could do any research online. He’s spending days in the law school library.

This is exactly the sort of thing I am writing about in A Hacker’s Mind. Legal code isn’t the same as computer code, but it’s a series of rules with inputs and outputs. And just like computer code, legal code has bugs. And some of those bugs are also vulnerabilities. And some of those vulnerabilities can be exploited—just as Mitnick learned.

Mitnick was a hacker. His attorney was not.

Backblaze vs. Dropbox: Backing Up Our Backup Claims

Post Syndicated from Stephanie Doyle original https://www.backblaze.com/blog/backblaze-vs-dropbox-backing-up-our-backup-claims/


If you follow the Backblaze blog, you’ve likely come across some of our “How to Back Up Your Life” posts. We’re interested in helping you, our readers, design the best backup plan for your needs, regardless of what your setup is, what social networks you’re on, or if you’re on a Mac or a PC.

Of course, Dropbox has shown up in that content. We have several articles talking about the best ways to integrate with their platform, and some articles that just talk about how to deal with the differences between sync and backup.

Recently, we heard that Dropbox released a backup product and wrote an article comparing our two services. (We’re flattered that they consider Backblaze to be the gold standard to compare to!) We thought we’d take this opportunity to respond, mostly because we want our library of guides to include their new offering, and a little bit because, well, there were some interesting interpretations included in the article.

Without further ado, our thoughts on the differences between Backblaze and Dropbox backup.

Backup vs. Sync

Dropbox started out as a syncing service, which, as we’ve noted before, is not the same as a backup service. When you’re using a sync service, you can easily delete or change a file, save it, and then lose the one you actually wanted to keep. This is one of the big reasons you should back up, even if your files are synced.

Over the past several years, Dropbox has been expanding their offerings, including file transfer, document signing, and now backup. It makes a lot of sense if you want to be a leading file management system. But, does Dropbox Backup stack up as a functional, independent product—or is it more of an add-on they’re offering to their sync functionality?

A Quick Note on Citing Your Sources…

When I set out to write this article, I first wanted to see if the things Dropbox claims hold water. After all, innovation is about iteration, and you don’t change or get better if you believe your product is perfect. Maybe we could learn something.

I kept hearing about this product research they’d done:

Source: Dropbox Backup vs. Backblaze.

You know we at Backblaze love data, so I was curious—How did they collect this data? Who were these users? I couldn’t find much more information about it in the article. But, after some digging, I found this on their product page:

Source: Dropbox Backup page.

It makes sense that people who already use Dropbox would like a product similar to the one they’re paying for. But, do the rest of the claims of the article hold true?

Let’s Talk Pricing

Hey, price is definitely a part of my decision when I purchase services, and I’m sure it’s part of yours too. So, let’s get the big argument out of the way first.

Backblaze Personal Backup is $7 per month. That license includes an automatic, set-it-and-forget-it backup service, unlimited data storage, 30-day version history, and you can add one-year version history for just $2 per month or forever version history for $2 per month plus $0.005 per GB for anything over 10GB.

For argument’s sake, let’s grant that Dropbox also built a backup product that runs smoothly in the background. I haven’t personally tried it, but I’ve used Dropbox for file management, and it’s a great service.

Dropbox Backup has several tiers of payment. It’s also included in many of their other paid plans—so, in other words, if you’re already paying $12–$90+ per month for Dropbox, you can take advantage of Dropbox Backup. But, if you’re trying to purchase just Dropbox Backup, there are several tiers of licensing, and (like most SaaS companies) there are discounts for paying monthly versus yearly.

So, let’s try to compare apples to apples here. Say you only have $10 per month budgeted for your backup plan. Here’s what you’d get with Dropbox:

  • Year-long commitment – so no flexibility to cancel
  • 2,000GB data cap
  • 30-day version history

For the same $10 per month, here’s what you’d get with Backblaze:

  • Monthly commitment – flexibility to cancel
  • No data cap
  • One-year version history

For reference, in 2020 most consumers were storing around 500GB of data in their personal storage clouds, but, unsurprisingly, we store more data every year. According to experts, data storage is doubling about every four years. So, you can certainly expect those “running out of space” notifications to be pushing you to upgrade your Dropbox service, and probably sooner than you’d expect.
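Taking both figures at face value (a 500GB baseline in 2020 and a four-year doubling period), a quick back-of-envelope calculation shows when a typical user would hit Dropbox Backup’s 2,000GB cap:

```python
import math

baseline_gb, cap_gb = 500, 2000  # 2020 average vs. Dropbox Backup cap
doubling_years = 4               # "doubling about every four years"

# Doublings needed to reach the cap, converted to calendar years
years_to_cap = math.log2(cap_gb / baseline_gb) * doubling_years
print(f"cap reached in about {years_to_cap:.0f} years, "
      f"around {2020 + round(years_to_cap)}")
```

Two doublings gets you from 500GB to 2,000GB, so under these assumptions the average user crosses the cap about eight years after the 2020 baseline.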

Speaking of Flexibility

Once you check out Dropbox’s Help docs, there are a few other things to note. Essentially, if you want to use Dropbox Backup, you have to turn off other syncing and backup services (except for OneDrive).

Source: How to Use Dropbox Backup.

In order for Dropbox Backup to work, you have to turn off iCloud and Google Backup/Sync services, both of which are super compatible with your mobile devices and which many folks rely on (two billion Google customers can’t be wrong). And, what about business use cases? Say you’re an enterprise client who wants to work in G-Suite—Dropbox Backup is not your answer. To put it simply: Dropbox Backup works best if Dropbox is the product you also use to store your files in the cloud.

Backblaze, on the other hand, works with whatever other services you’re rocking. Many of the choices we’ve made are reflective of that, including our restoration process. Dropbox offers restoration in place—if you use Dropbox to manage your files already. Basically, when you restore in place, you’re making a change to the virtual environment of your files (their copy of your hard drive that lives in Dropbox), and then they send that back to your computer. If you use a different syncing service or are accessing a file from another device, well, you’re going through the same download/restore process as every other backup service.

Restores for All

Here’s another thing: one of the main points in Dropbox’s article is that we offer recovery via USB. They turn their noses up at delivering files via the mail—Why would you wait for that?

Well, if you’ve lived in areas with not-great internet, dealt with being the family IT hero, or have a ton of data that needs to be moved, you know that having many ways to restore is key. Sure, it’s easy to scoff at all things analog, “OMG a USB drive via the mail?!” But an external drive (in this example, a USB) comes in super handy when you’re not tech savvy or have a ton of data to move—anyone who’s had to migrate lots of files (at work or at home) knows that sometimes the internet is not as fast as moving data via external devices.

Sure, there are tech reasons rapid ingest devices matter. But these guys matter too.

And, of course, you can always restore files from the internet with your Backblaze Personal Backup account. That’s our front-line method in our Help docs, and we’ve built a Download Manager to make things more seamless for our customers. We’ve made updates to our mobile apps, and just as importantly, we offer Backblaze B2 Storage Cloud and Backblaze Business Backup products. That means that if you ever outgrow our Personal Backup services, we’ve got you covered.

To Sum Up

We’re always happy there are more backup options for consumers. A little Backblaze flame warms our hearts when we know people’s data is backed up. Of course, we’d love it if everyone used Backblaze, but we want people to back up their data, even if it’s with a competitor.

If you’re already a paying Dropbox user, this may be a great option for you. But, if you’re like the majority of people and need something that works, no matter where/how you store your files or what other services you use, Backblaze Personal Backup is still your easy, affordable, and proven option.

The post Backblaze vs. Dropbox: Backing Up Our Backup Claims appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

On Alec Baldwin’s Shooting

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/on-alec-baldwins-shooting.html

We recently learned that Alec Baldwin is being charged with involuntary manslaughter for his accidental shooting on a movie set. I don’t know the details of the case, nor the intricacies of the law, but I have a question about movie props.

Why was an actual gun used on the set? And why were actual bullets used on the set? Why wasn’t it a fake gun: plastic, or metal without a working barrel? Why does it have to fire blanks? Why can’t everyone just pretend, and let someone add the bang and the muzzle flash in post-production?

Movies are filled with fakery. The light sabers in Star Wars weren’t real; the lighting effects and “wooj-wooj” noises were added afterwards. The phasers in Star Trek weren’t real either. Jar Jar Binks was 100% computer generated. So were a gazillion “props” from the Harry Potter movies. Even regular, non-SF non-magical movies have special effects. They’re easy.

Why are guns different?

US Cyber Command Operations During the 2022 Midterm Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/us-cyber-command-operations-during-the-2022-midterm-elections.html

The head of both US Cyber Command and the NSA, Gen. Paul Nakasone, broadly discussed that first organization’s offensive cyber operations during the runup to the 2022 midterm elections. He didn’t name names, of course:

“We did conduct operations persistently to make sure that our foreign adversaries couldn’t utilize infrastructure to impact us,” said Nakasone. “We understood how foreign adversaries utilize infrastructure throughout the world. We had that mapped pretty well. And we wanted to make sure that we took it down at key times.”

Nakasone noted that Cybercom’s national mission force, aided by NSA, followed a “campaign plan” to deprive the hackers of their tools and networks. “Rest assured,” he said. “We were doing operations well before the midterms began, and we were doing operations likely on the day of the midterms.” And they continued until the elections were certified, he said.

We know Cybercom did similar things in 2018 and 2020, and presumably will again in two years.

Bulk Surveillance of Money Transfers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/bulk-surveillance-of-money-transfers.html

Just another obscure warrantless surveillance program.

US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

You need to be a member of law enforcement with an active government email account to use the database, which is available through a publicly visible web portal. Leber told The Journal that there haven’t been any known breaches or instances of law enforcement misuse. However, Wyden noted that the surveillance program included more states and countries than previously mentioned in briefings. There have also been subpoenas for bulk money transfer data from Homeland Security Investigations (which withdrew its request after Wyden’s inquiry), the DEA and the FBI.

How is it that Arizona can be in charge of this?

Wall Street Journal podcast—with transcript—on the program. I think the original reporting was from last March, but I missed it back then.

Managing Dev Environments with Amazon CodeCatalyst

Post Syndicated from Ryan Bachman original https://aws.amazon.com/blogs/devops/managing-dev-environments-with-amazon-codecatalyst/

An Amazon CodeCatalyst Dev Environment is a cloud-based development environment that you can use in CodeCatalyst to quickly work on the code stored in the source repositories of your project. The project tools and application libraries included in your Dev Environment are defined by a devfile in the source repository of your project.

Introduction

In the previous CodeCatalyst post, Team Collaboration with Amazon CodeCatalyst, I focused on CodeCatalyst's collaboration capabilities and how they related to The Unicorn Project's main protagonist. At the beginning of Chapter 2, Maxine is struggling to configure her development environment. She is two days into her new job and still cannot build the application code. She has identified over 100 dependencies she is missing. The documentation is out of date and nobody seems to know where the dependencies are stored. I can sympathize with Maxine. In this post, I will focus on managing development environments to show how CodeCatalyst removes the burden of managing workload-specific configurations and produces reliable, on-demand development environments.

Prerequisites

If you would like to follow along with this walkthrough, you will need to:

Have an AWS Builder ID for signing in to CodeCatalyst.

Belong to a space and have the space administrator role assigned to you in that space. For more information, see Creating a space in CodeCatalyst, Managing members of your space, and Space administrator role.

Have an AWS account associated with your space and have the CodeCatalyst service role in that account. For more information about the role and role policy, see Creating a CodeCatalyst service role.

Walkthrough

As with the previous posts in our CodeCatalyst series, I am going to use the Modern Three-tier Web Application blueprint.  Blueprints provide sample code and CI/CD workflows to help make getting started easier across different combinations of programming languages and architectures. To follow along, you can re-use a project you created previously, or you can refer to a previous post that walks through creating a project using the blueprint.

One of the most difficult aspects of my time as a developer was finding ways to contribute quickly to a new project. Whenever I joined a new project, getting to the point where I could meaningfully contribute to its code base was always harder than writing the actual code. A major contributor to this inefficiency was the lack of a process for managing my local development environment. I will explore how CodeCatalyst can help solve this challenge. For this walkthrough, I want to add a new test that will allow local testing of Amazon DynamoDB. To achieve this, I will use a CodeCatalyst Dev Environment.

CodeCatalyst Dev Environments are managed cloud-based development environments that you can use to access and modify code stored in a source repository. You can launch a project specific dev environment that will automate check-out of your project’s repo or you can launch an empty environment to use for accessing third-party source providers.  You can learn more about CodeCatalyst Dev Environments in the CodeCatalyst User Guide.

CodeCatalyst user interface showing Create Dev Environment

Figure 1. Creating a new Dev Environment

To begin, I navigate to the Dev Environments page under the Code section of the navigation menu. I then use the Create Dev Environment button to launch my environment. For this post, I am using the AWS Cloud9 IDE, but you can follow along with the IDE you are most comfortable using. On the next screen, I select Work in New Branch, assign local_testing as the new branch name, branching from main. I leave the remaining options at their defaults and choose Create.

Create Dev Environment user interface with work in a new branch selected

Figure 2. Dev Environment Create Options

After waiting less than a minute, my IDE is ready in a new tab and I am ready to begin work. The first thing I see in my Dev Environment is an information window asking whether I want to navigate to the Dev Environment Settings. Because I need to enable local testing of DynamoDB, not only for myself but for other developers who will collaborate on this project, I need to update the project's devfile. I navigate to the settings tab because I know it contains information on the project's devfile and lets me open the file for editing.

AWS Toolkit prompting to Open Dev Environment Settings.

Figure 3. Toolkit Welcome Banner

Devfiles allow you to model a Dev Environment's configuration and dependencies so that you can reproduce consistent Dev Environments and reduce the manual effort of setting up future environments. The tools and application libraries included in your Dev Environment are defined by the devfile in the source repository of your project. Since this project was created from a blueprint, one is provided. For blank projects, a default CodeCatalyst devfile is created when you first launch an environment. To learn more about devfiles, see https://devfile.io.

In the settings tab, I find a link to the configured devfile. When I click the edit button, a new file tab opens and I can make changes. I first add an env section to the container that hosts our Dev Environment. By adding an environment variable and value, any new Dev Environment created from this project's repository will include that value. Next, I add a second container to the Dev Environment that will run DynamoDB locally, by adding a new container component. I use Amazon's verified DynamoDB Docker image for my environment. Attaching additional images allows you to extend the Dev Environment and include tools or services that can be made available locally. My updates are highlighted in the green sections below.

Devfile.yaml with environment variable and DynamoDB container added

Figure 4. Example Devfile
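The devfile itself appears only as a screenshot above. As a rough sketch of the two additions described (the component names and the base dev image are illustrative, not copied from the blueprint), the YAML might look like:

```yaml
schemaVersion: 2.0.0
components:
  - name: dev                       # container that hosts the Dev Environment
    container:
      image: public.ecr.aws/aws-mde/universal-image:latest   # illustrative
      env:
        - name: IS_LOCAL            # read by the tests to switch endpoints
          value: "true"
  - name: dynamodb-local            # second container running DynamoDB locally
    container:
      image: amazon/dynamodb-local:latest   # Amazon's verified DynamoDB image
```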

I save my changes and navigate back to the Dev Environment Settings tab. I notice that my changes were automatically detected, and I am prompted to restart my development environment for them to take effect. Modifications to the devfile require a restart. You can restart a Dev Environment using the toolkit or from the CodeCatalyst UI.

AWS Toolkit prompt asking to restart the dev environment

Figure 5. Dev Environment Settings

After waiting a few seconds for my Dev Environment to restart, I am ready to write my test. I use the IDE's file explorer, expand the repo's ./tests/unit folder, and create a new file named test_dynamodb.py. Using the IS_LOCAL environment variable I configured in the devfile, I can include a conditional in my test that sets the endpoint the AWS SDK for Python (Boto3) will use to connect to the DynamoDB service. This way, I can run tests locally before pushing my changes and still have tests complete successfully in my project's workflow. My full test file is included below.

Python unit test with local code added

Figure 6. Dynamodb test file
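The test file is shown only as a screenshot. The endpoint-switching conditional it describes might be sketched like this (the helper name and the DynamoDB Local port are my assumptions, not taken from the post):

```python
import os

def dynamodb_endpoint():
    """Return the endpoint URL for Boto3, or None to use the real AWS service."""
    if os.environ.get("IS_LOCAL"):
        # IS_LOCAL is set by the devfile; DynamoDB Local listens on 8000 by default
        return "http://localhost:8000"
    return None

# In the test, the resource would then be created along these lines:
#   dynamodb = boto3.resource("dynamodb", endpoint_url=dynamodb_endpoint())
```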

Now that I have completed my changes to the Dev Environment using the devfile and added a test, I am ready to run the test locally to verify it. I will use pytest to ensure the tests pass before pushing any changes. From the repo's root folder, I run the command pip install -r requirements-dev.txt. Once my dependencies are installed, I issue the command pytest -k unit. All tests pass, as I expect.

Result of the pytest shown at the command line

Figure 7. Pytest test results

Rather than manually installing my development dependencies in each environment, I could also use the devfile to include commands and automate their execution during Dev Environment lifecycle events. Refer to the devfile documentation on commands and events for more information.
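For example, a sketch against the devfile 2.x schema (the command id and component name here are illustrative) that installs the dev dependencies on every start could look like:

```yaml
commands:
  - id: install-dev-deps
    exec:
      component: dev                                     # illustrative component name
      commandLine: pip install -r requirements-dev.txt
events:
  postStart:
    - install-dev-deps
```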

Finally, I am ready to push my changes back to my CodeCatalyst source repository. I use the git extension of Cloud9 to review my changes. After verifying that my changes are what I expect, I use the git extension to stage, commit, and push the new test file and the modified devfile so other collaborators can adopt the improvements I made.

Figure 8.  Changes reviewed in CodeCatalyst Cloud9 git extension.


Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the two stacks that CDK deployed, using the AWS CloudFormation console in the AWS account you associated when you launched the blueprint. These stacks will have names like mysfitsXXXXXWebStack and mysfitsXXXXXAppStack. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Conclusion

In this post, you learned how CodeCatalyst provides configurable, on-demand Dev Environments. You also learned how devfiles help you define a consistent experience for developing within a CodeCatalyst project. Please follow our DevOps blog channel as I continue to explore how CodeCatalyst solves Maxine's and other builders' challenges.

About the author:

Ryan Bachman

Ryan Bachman is a Sr. Specialist Solutions Architect at AWS, and specializes in working with customers to improve their DevOps practices. Ryan has over 20 years of professional experience as a technologist, and has held roles in many different domains to include development, networking architecture, and technical product management. He is passionate about automation and helping customers increase software development productivity.

No-Fly List Exposed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/no-fly-list-exposed.html

I can’t remember the last time I thought about the US no-fly list: the list of people so dangerous they should never be allowed to fly on an airplane, yet so innocent that we can’t arrest them. Back when I thought about it a lot, I realized that the TSA’s practice of giving it to every airline meant that it was not well protected, and it certainly ended up in the hands of every major government that wanted it.

The list is back in the news today, having been left exposed on an insecure airline computer. (The airline is CommuteAir, a company so obscure that I’ve never heard of it before.)

This is, of course, the problem with having to give a copy of your secret list to lots of people.

Publisher’s Weekly Review of A Hacker’s Mind

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/publishers-weekly-review-of-a-hackers-mind.html

Publisher’s Weekly reviewed A Hacker’s Mind—and it’s a starred review!

“Hacking is something that the rich and powerful do, something that reinforces existing power structures,” contends security technologist Schneier (Click Here to Kill Everybody) in this excellent survey of exploitation. Taking a broad understanding of hacking as an “activity allowed by the system that subverts the… system,” Schneier draws on his background analyzing weaknesses in cybersecurity to examine how those with power take advantage of financial, legal, political, and cognitive systems. He decries how venture capitalists “hack” market dynamics by subverting the pressures of supply and demand, noting that venture capital has kept Uber afloat despite the company having not yet turned a profit. Legal loopholes constitute another form of hacking, Schneier suggests, discussing how the inability of tribal courts to try non-Native individuals means that many sexual assaults of Native American women go unprosecuted because they were committed by non-Native American men. Schneier outlines strategies used by corporations to capitalize on neural processes and “hack… our attention circuits,” pointing out how Facebook’s algorithms boost content that outrages users because doing so increases engagement. Elegantly probing the mechanics of exploitation, Schneier makes a persuasive case that “we need society’s rules and laws to be as patchable as your computer.” With lessons that extend far beyond the tech world, this has much to offer.

The book will be published on February 7. Here’s the book’s webpage. You can pre-order a signed copy from me here.

Friday Squid Blogging: Another Giant Squid Captured on Video

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/friday-squid-blogging-another-giant-squid-captured-on-video.html

Here’s a new video of a giant squid, filmed in the Sea of Japan.

I believe it’s injured. It’s so close to the surface, and not really moving very much.

“We didn’t see the kinds of agile movements that many fish and marine creatures normally show,” he said. “Its tentacles and fins were moving very slowly.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Real-World Steganography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/real-world-steganography.html

From an article about Zheng Xiaoqing, an American convicted of spying for China:

According to a Department of Justice (DOJ) indictment, the US citizen hid confidential files stolen from his employers in the binary code of a digital photograph of a sunset, which Mr Zheng then mailed to himself.
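The article doesn't say exactly how the files were embedded. Purely as an illustration of the general technique (least-significant-bit steganography over raw bytes, not Mr. Zheng's actual method), hiding data in an image's binary code works like this:

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    # Hide each message bit in the low-order bit of one cover byte.
    # Changing only the LSB leaves image pixel data visually unchanged.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract(stego: bytearray, length: int) -> bytes:
    # Reassemble `length` bytes from the low-order bits, in the same order.
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

A real photograph has far more than enough bytes to carry whole documents this way, which is what makes the technique practical for exfiltration.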

Security Analysis of Threema

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/security-analysis-of-threema.html

A group of Swiss researchers have published an impressive security analysis of Threema.

We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. As one example, we present a cross-protocol attack which breaks authentication in Threema and which exploits the lack of proper key separation between different sub-protocols. As another, we demonstrate a compression-based side-channel attack that recovers users’ long-term private keys through observation of the size of Threema encrypted back-ups. We discuss remediations for our attacks and draw three wider lessons for developers of secure protocols.
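The compression-based side channel mentioned in the abstract belongs to the same class as CRIME and BREACH: when attacker-influenced data is compressed together with a secret, the compressed size leaks information about the secret. A toy Python demonstration of the general mechanism (this is not Threema's actual backup format):

```python
import zlib

SECRET = "backup-key=correct horse battery staple"  # stands in for a private key

def compressed_size(adjacent_data: str) -> int:
    # The secret and attacker-influenced data are compressed together,
    # as in a backup that mixes key material with other fields.
    blob = (SECRET + "\n" + adjacent_data).encode()
    return len(zlib.compress(blob, 9))

# A guess matching the secret deduplicates against it and compresses smaller
# than an unrelated string, so the output size alone reveals whether the
# guess was right -- repeated guessing recovers the secret piece by piece.
```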

From a news article:

Threema has more than 10 million users, which include the Swiss government, the Swiss army, German Chancellor Olaf Scholz, and other politicians in that country. Threema developers advertise it as a more secure alternative to Meta’s WhatsApp messenger. It’s among the top Android apps for a fee-based category in Switzerland, Germany, Austria, Canada, and Australia. The app uses a custom-designed encryption protocol in contravention of established cryptographic norms.

The company is performing the usual denials and deflections:

In a web post, Threema officials said the vulnerabilities applied to an old protocol that’s no longer in use. It also said the researchers were overselling their findings.

“While some of the findings presented in the paper may be interesting from a theoretical standpoint, none of them ever had any considerable real-world impact,” the post stated. “Most assume extensive and unrealistic prerequisites that would have far greater consequences than the respective finding itself.”

Left out of the statement is that the protocol the researchers analyzed is old because they disclosed the vulnerabilities to Threema, and Threema updated it.

Manually Approving Security Changes in CDK Pipeline

Post Syndicated from original https://aws.amazon.com/blogs/devops/manually-approving-security-changes-in-cdk-pipeline/

In this post I will show you how to add a manual approval to AWS Cloud Development Kit (CDK) Pipelines to confirm security changes before deployment. With this solution, when a developer commits a change, CDK pipeline identifies an IAM permissions change, pauses execution, and sends a notification to a security engineer to manually approve or reject the change before it is deployed.

Introduction

In my role I talk to a lot of customers that are excited about the AWS Cloud Development Kit (CDK). One of the things they like is that L2 constructs often generate IAM and other security policies. This can save a lot of time and effort over hand coding those policies. Most customers also tell me that the policies generated by CDK are more secure than the policies they generate by hand.

However, these same customers are concerned that their security engineering team does not know what is in the policies CDK generates. In the past, these customers spent a lot of time crafting a handful of IAM policies that developers can use in their apps. These policies were well understood, but overly permissive because they were often reused across many applications.

Customers want more visibility into the policies CDK generates. Luckily, CDK provides a mechanism to approve security changes. If you are using CDK, you have probably been prompted to approve security changes when you run cdk deploy at the command line. That works great on a developer's machine, but customers want to build the same confirmation into their continuous delivery pipeline. CDK provides a mechanism for this with the ConfirmPermissionsBroadening action. Note that ConfirmPermissionsBroadening is only supported by the AWS CodePipeline deployment engine.

Background

Before I talk about ConfirmPermissionsBroadening, let me review how CDK creates IAM policies. Consider the “Hello, CDK” application created in AWS CDK Workshop. At the end of this module, I have an AWS Lambda function and an Amazon API Gateway defined by the following CDK code.

// defines an AWS Lambda resource
const hello = new lambda.Function(this, 'HelloHandler', {
  runtime: lambda.Runtime.NODEJS_14_X,    // execution environment
  code: lambda.Code.fromAsset('lambda'),  // code loaded from "lambda" directory
  handler: 'hello.handler'                // file is "hello", function is "handler"
});

// defines an API Gateway REST API resource backed by our "hello" function.
new apigw.LambdaRestApi(this, 'Endpoint', {
  handler: hello
});

Note that I did not need to define the IAM role or Lambda permissions. I simply passed a reference to the Lambda function to the API Gateway (line 10 above). CDK understood what I was doing and generated the permissions for me. For example, CDK generated the following Lambda permission, among others.

{
  "Effect": "Allow",
  "Principal": {
    "Service": "apigateway.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:HelloHandler2E4FBA4D",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:execute-api:us-east-1:123456789012:9y6ioaohv0/prod/*/"
    }
  }
}

Notice that CDK generated a narrowly scoped policy that allows a specific API (line 10 above) to call a specific Lambda function (line 7 above). This policy cannot be reused elsewhere. Later in the same workshop, I created a Hit Counter construct using a Lambda function and an Amazon DynamoDB table. Again, I associated them using a single line of CDK code.

table.grantReadWriteData(this.handler);

As in the prior example, CDK generated a narrowly scoped IAM policy. This policy allows the Lambda function to perform certain actions (lines 4-11) on a specific table (line 14 below).

{
  "Effect": "Allow",
  "Action": [
    "dynamodb:BatchGetItem",
    "dynamodb:ConditionCheckItem",
    "dynamodb:DescribeTable",
    "dynamodb:GetItem",
    "dynamodb:GetRecords",
    "dynamodb:GetShardIterator",
    "dynamodb:Query",
    "dynamodb:Scan"
  ],
  "Resource": [
    "arn:aws:dynamodb:us-east-1:123456789012:table/HelloHitCounterHits"
  ]
}

As you can see, CDK is doing a lot of work for me. In addition, CDK is creating narrowly scoped policies for each resource, rather than sharing a broadly scoped policy in multiple places.

CDK Pipelines Permissions Checks

Now that I have reviewed how CDK generates policies, let’s discuss how I can use this in a Continuous Deployment pipeline. Specifically, I want to allow CDK to generate policies, but I want a security engineer to review any changes using a manual approval step in the pipeline. Of course, I don’t want security to be a bottleneck, so I will only require approval when security statements or traffic rules are added. The pipeline should skip the manual approval if there are no new security rules added.

Let’s continue to use CDK Workshop as an example. In the CDK Pipelines module, I used CDK to configure AWS CodePipeline to deploy the “Hello, CDK” application I discussed above. One of the last things I do in the workshop is add a validation test using a post-deployment step. Adding a permission check is similar, but I will use a pre-deployment step to ensure the permission check happens before deployment.

First, I will import ConfirmPermissionsBroadening from the pipelines package

import {ConfirmPermissionsBroadening} from "aws-cdk-lib/pipelines";

Then, I can simply add ConfirmPermissionsBroadening to the deployStage using the addPre method as follows.

const deploy = new WorkshopPipelineStage(this, 'Deploy');
const deployStage = pipeline.addStage(deploy);

deployStage.addPre(
  new ConfirmPermissionsBroadening("PermissionCheck", {
    stage: deploy
  })
);

deployStage.addPost(
    // Post Deployment Test Code Omitted
)

Once I commit and push this change, a new manual approval step called PermissionCheck.Confirm is added to the Deploy stage of the pipeline. In the future, if I push a change that adds additional rules, the pipeline will pause here and await manual approval as shown in the screenshot below.

Figure 1. Pipeline waiting for manual review


When the security engineer clicks the review button, she is presented with the following dialog. From here, she can click the URL to see a summary of the change I am requesting, which was captured in the build logs. She can also approve or reject the change and add comments if needed.

Figure 2. Manual review dialog with a link to the build logs


When the security engineer clicks the review URL, she is presented with the following summary of security changes.

Figure 3. Summary of security changes in the build logs


The final feature I want to add is an email notification so the security engineer knows when there is something to approve. To accomplish this, I create a new Amazon Simple Notification Service (SNS) topic and subscription and associate it with the ConfirmPermissionsBroadening Check.

// Create an SNS topic and subscription for security approvals
const topic = new sns.Topic(this, 'SecurityApproval');
topic.addSubscription(new subscriptions.EmailSubscription('[email protected]'));

deployStage.addPre(
  new ConfirmPermissionsBroadening("PermissionCheck", {
    stage: deploy,
    notificationTopic: topic
  })
);

With the notification configured, the security engineer will receive an email when an approval is needed. She will have an opportunity to review the security change I made and assess its impact. This gives the security engineering team the visibility they want into the policies CDK is generating. In addition, the approval step is skipped if a change does not add security rules, so the security engineer does not become a bottleneck in the deployment process.

Conclusion

AWS Cloud Development Kit (CDK) automates the generation of IAM and other security policies. This can save a lot of time and effort but security engineering teams want visibility into the policies CDK generates. To address this, CDK Pipelines provides the ConfirmPermissionsBroadening action. When you add ConfirmPermissionsBroadening to your CI/CD pipeline, CDK will wait for manual approval before deploying a change that includes new security rules.

About the author:

Brian Beach

Brian Beach has over 20 years of experience as a Developer and Architect. He is currently a Principal Solutions Architect at Amazon Web Services. He holds a Computer Engineering degree from NYU Poly and an MBA from Rutgers Business School. He is the author of “Pro PowerShell for Amazon Web Services” from Apress. He is a regular author and has spoken at numerous events. Brian lives in North Carolina with his wife and three kids.

AI and Political Lobbying

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/ai-and-political-lobbying.html

Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.

Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes—not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers.

Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.

A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area—say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.

Once individuals and strategies are identified, an AI chatbot like ChatGPT could craft written messages to be used in letters, comments—anywhere text is useful. Human lobbyists could also target those individuals directly. It’s the combination that’s important: Editorial and social media comments only get you so far, and knowing which legislators to target isn’t itself enough.

This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of AI may be so hard to detect—particularly if it is being used strategically to guide human actors.

The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager—at least, performatively so—to accept and respond to messages that appear to be from their constituents.

Maybe an AI system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator’s public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.

What makes the threat of AI-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.

AI could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.

Moreover, the flexibility of AI could help achieve influence across many policies and jurisdictions simultaneously. Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With AI assistance in navigating these variations, it may become easier to exert power across political boundaries.

Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.

To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an AI system could be made available to anyone. If we’re lucky, maybe this kind of strategy-generating AI could revitalize the democratization of democracy by giving this kind of lobbying power to the powerless.

However, the biggest and most powerful institutions will likely use any AI lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders—people who can walk the halls of the legislature—and money. Lobbying isn’t just about giving the right message to the right person at the right time; it’s also about giving money to the right person at the right time. And while an AI chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it’s impossible to predict what a future filled with AI lobbyists will look like, it will probably make the already influential and powerful even more so.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Edited to Add: After writing this, we discovered that a research group is already studying AI and lobbying:

We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link: https://github.com/JohnNay/llm-lobbyist.)

  1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
  2. Using either the original official bill summary (if it was not too long), or the summarized version:
    1. Assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing.
    2. Provide an explanation for why the bill is relevant or not.
    3. Provide a confidence level for the overall answer.
  3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.
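The steps above can be sketched in Python. This is a minimal illustration, not the study's actual code (that is at the GitHub link above): `query_llm` is a stub standing in for a real LLM API call, and the names `MAX_CONTEXT_CHARS`, `fit_to_context`, and the naive answer parsing are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an autoregressive LLM API.
    Replace with a real client call in practice."""
    raise NotImplementedError("wire this up to an LLM API")


# Assumed context budget, measured crudely in characters rather than tokens.
MAX_CONTEXT_CHARS = 12_000


def fit_to_context(bill_summary: str) -> str:
    """Step 1: pass short official summaries through unchanged; ask the
    model to compress summaries that would overflow the context window."""
    if len(bill_summary) <= MAX_CONTEXT_CHARS:
        return bill_summary
    return query_llm(
        "Summarize the following Congressional bill summary in a few "
        f"paragraphs:\n\n{bill_summary}"
    )


@dataclass
class RelevanceAssessment:
    relevant: bool
    explanation: str
    confidence: int  # 0-100


def assess_relevance(bill_summary: str, company_10k: str) -> RelevanceAssessment:
    """Steps 2a-2c: relevance verdict, explanation, and confidence."""
    prompt = (
        f"Company description (from its SEC 10-K filing):\n{company_10k}\n\n"
        f"Bill summary:\n{fit_to_context(bill_summary)}\n\n"
        "Is this bill relevant to the company? Answer YES or NO, explain "
        "why, and give a confidence from 0 to 100."
    )
    raw = query_llm(prompt)
    # A real implementation would constrain the model's output format and
    # parse the explanation and confidence robustly; this naive version
    # just records the raw answer and a placeholder confidence.
    return RelevanceAssessment(
        relevant=raw.strip().upper().startswith("YES"),
        explanation=raw,
        confidence=0,
    )


def draft_letter(bill_summary: str, company_10k: str, sponsor: str) -> Optional[str]:
    """Step 3: only if the bill is deemed relevant, draft a letter to its
    sponsor arguing for changes to the proposed legislation."""
    verdict = assess_relevance(bill_summary, company_10k)
    if not verdict.relevant:
        return None
    return query_llm(
        f"Draft a letter to {sponsor}, the sponsor of the bill below, "
        "arguing for changes favorable to the company.\n\n"
        f"Bill: {fit_to_context(bill_summary)}\n\nCompany: {company_10k}"
    )
```

The point of the structure is that everything except the model calls is ordinary plumbing: once `query_llm` is connected to a real API, the same loop can be run over every bill in a legislative session.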

Here is the paper.

The FBI Identified a Tor User

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/the-fbi-identified-a-tor-user.html

No details, though:

According to the complaint against him, Al-Azhari allegedly visited a dark web site that hosts “unofficial propaganda and photographs related to ISIS” multiple times on May 14, 2019. By virtue of being a dark web site—that is, one hosted on the Tor anonymity network—it should have been difficult for the site owner or a third party to determine the real IP address of any of the site’s visitors.

Yet, that’s exactly what the FBI did. It found Al-Azhari allegedly visited the site from an IP address associated with Al-Azhari’s grandmother’s house in Riverside, California. The FBI also found what specific pages Al-Azhari visited, including a section on donating Bitcoin; another focused on military operations conducted by ISIS fighters in Iraq, Syria, and Nigeria; and another page that provided links to material from ISIS’s media arm. Without the FBI deploying some form of surveillance technique, or Al-Azhari using another method to visit the site that exposed his IP address, this should not have been possible.

There are lots of ways to de-anonymize Tor users. Someone at the NSA gave a presentation on this ten years ago. (I wrote about it for the Guardian in 2013, in an essay that now reads as quite dated in light of what we’ve learned since then.) It’s unlikely that the FBI uses the same sorts of broad surveillance techniques that the NSA does, but it’s certainly possible that the NSA did the surveillance and passed the information to the FBI.