
Which VPN Providers Really Take Anonymity Seriously in 2020?

Post Syndicated from Ernesto original https://torrentfreak.com/best-vpn-anonymous-no-logging/

The VPN industry is booming and prospective users have hundreds of options to pick from. All claim to be the best, but some are more anonymous than others.

The VPN review business is also flourishing. Just do a random search for “best VPN service” or “VPN review” and you’ll see dozens of sites filled with recommendations and preferred picks.

We don’t want to make any recommendations. When it comes to privacy and anonymity, an outsider can’t offer any guarantees. Vulnerabilities are always lurking around the corner and even with the most secure VPN, you still have to trust the VPN company with your data.

Instead, we aim to provide an unranked overview of VPN providers, asking them questions we believe are important. Many of these questions relate to anonymity and security, and the various companies answer them in their own words.

We hope that this helps users to make an informed choice. However, we stress that users themselves should always make sure that their VPN setup is secure, working correctly, and not leaking.
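As a quick self-check along those lines, here is a minimal sketch (Python, using the requests library and the third-party lookup service api.ipify.org, both assumptions rather than anything recommended by the providers below) that compares your apparent public IP with the VPN disconnected and connected. A thorough test would also cover DNS and WebRTC leaks.

```python
# Minimal sanity check: does the public IP actually change when the VPN is up?
import requests

def public_ip() -> str:
    # api.ipify.org returns the caller's public IP address as plain text.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

if __name__ == "__main__":
    input("Make sure the VPN is DISCONNECTED, then press Enter... ")
    real_ip = public_ip()
    input("Now CONNECT the VPN and press Enter... ")
    vpn_ip = public_ip()
    if vpn_ip == real_ip:
        print(f"Possible leak: traffic still exits via {real_ip}")
    else:
        print(f"OK: public IP changed from {real_ip} to {vpn_ip}")
```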

This year’s questions and answers are listed below. We have included all VPNs we contacted that don’t keep extensive logs or block torrent traffic on all of their servers.

The order of the providers is arbitrary and doesn’t carry any value. A few links in this article are affiliate links. This won’t cost you a penny more but it helps us to keep the lights on.

1. Do you keep (or share with third parties) ANY data that would allow you to match an IP-address and a timestamp to a current or former user of your service? If so, exactly what information do you hold/share and for how long?

2. What is the name under which your company is incorporated (+ parent companies, if applicable) and under which jurisdiction does your company operate?

3. What tools are used to monitor and mitigate abuse of your service, including limits on concurrent connections if these are enforced?

4. Do you use any external email providers (e.g. Google Apps), analytics, or support tools (e.g. live support, Zendesk) that hold information provided by users?

5. In the event you receive a DMCA takedown notice or a non-US equivalent, how are these handled?

6. What steps would be taken in the event a court orders your company to identify an active or former user of your service? How would your company respond to a court order that requires you to log activity for a user going forward? Have these scenarios ever played out in the past?

7. Is BitTorrent and other file-sharing traffic allowed on all servers? If not, why? Do you provide port forwarding services? Are any ports blocked?

8. Which payment systems/providers do you use? Do you take any measures to ensure that payment details can’t be linked to account usage or IP-assignments?

9. What is the most secure VPN connection and encryption algorithm you would recommend to your users?

10. Do you provide tools such as “kill switches” if a connection drops and DNS/IPv6 leak protection? Do you support Dual Stack IPv4/IPv6 functionality?

11. Are any of your VPN servers hosted by third parties? If so, what measures do you take to prevent those partners from snooping on any inbound and/or outbound traffic? Do you use your own DNS servers?

12. In which countries are your servers physically located? Do you offer virtual locations?

Tip: Here’s a list of all VPN providers covered here, with direct links to the answers.

Private Internet Access

1. We do not store any logs relating to traffic, session, DNS or metadata. There are no logs kept for any person or entity to match an IP address and a timestamp to a current or former user of our service. In summary, we do not log, period. Privacy is our policy.

2. Private Internet Access, Inc. is an Indiana corporation, under the parent company Kape Technologies PLC, a company listed on the London Stock Exchange.

3. We have an active, proprietary system in place to help mitigate abuse including attempts to bypass our simultaneous connection limit.

4. At the moment we are using Google Apps Suite and Google Analytics on our website only with interest and demographics tracking disabled and anonymized IP addresses enabled. We utilize DeskPro for our support team.

5. Primarily, we stress that our service is not intended to be used for illegal activities and copyright infringements and we request our users to comply with this when accepting our Terms of Use. That said, we have an active, proprietary system in place to help mitigate abuse that preserves the privacy of our customers while following the letter of the law.

6. Every subpoena is scrutinized to the highest extent for compliance with both the “spirit” and “letter of the law.” While we have not received any valid court orders to identify an active or former user of service, we do periodically receive subpoenas from law enforcement agencies that we scrutinize for compliance and respond accordingly. If forced to provide logs by a court of law, Private Internet Access has verified in court multiple times that we keep no logs. Our company would fight a court order that requires us to do any sort of logging.

7. BitTorrent and file-sharing traffic are not discriminated against or throttled. We do not censor our traffic, period. We do provide port forwarding services on some of our VPN servers; check here for the full list of PIA VPN servers that support port forwarding.

8. We utilize a variety of payment systems, including, but not limited to: PayPal, Credit Card (with Stripe), Amazon, Google, Bitcoin, Bitcoin Cash, Zcash, CashU, OKPay, PaymentWall, and even support payment using major store-bought gift cards. Payment details are only linked to accounts for billing purposes. IP assignments and other user activity on our VPN servers aren’t linkable to specific accounts or payment details because of our strict and demonstrated no-log policy.

9. At the moment, the most secure and practical VPN connection and encryption algorithm that we recommend to our users would be our cipher suite of AES-256 + RSA4096 + SHA256 over OpenVPN.

10. Our users gain access to a plethora of additional tools, including but not limited to a Kill Switch, IPv6 Leak Protection, DNS Leak Protection, Shared IP System, and MACE, which protect users from malware, trackers, and ads.

11. We utilize our own bare metal servers in third-party data centers that are operated by trusted business partners with whom we have completed serious due diligence. When countries or data centers fail to meet our high privacy standards, we remove our VPN server presence as has previously happened in Brazil, South Korea, Germany, and Russia.

12. We currently operate 3,395 servers across 64 locations in 44 countries. For more information on what countries are available, please visit our PIA network page. All of our locations are physical and not virtualized.

Private Internet Access details

ExpressVPN

1. No, ExpressVPN doesn’t keep any connection or activity logs, including never logging browsing history, data contents, DNS requests, timestamps, source IPs, outgoing IPs, or destination IPs.

2. Express VPN International Ltd is a British Virgin Islands (BVI) company.

3. We reserve the right to block specific abusive traffic to protect the server network and other ExpressVPN customers. With regards to limits on the number of devices, our systems are merely able to identify how many active sessions a given license has at a given moment in time and use that counter to decide whether a license is allowed to create one additional session. This counter is temporary and is not tracked over time.

4. We use Zendesk for support tickets and SnapEngage for live chat support; we have assessed the security profiles of both and consider them to be secure platforms. We use Google Analytics and cookies to collect marketing metrics for our website and several external tools for collecting crash reports (only if a user opts into sharing these reports). ExpressVPN is committed to protecting the privacy of our users, and our practices are discussed in detail in our comprehensive Privacy Policy.

5. As we do not keep any data or logs that could link specific activity to a given user, ExpressVPN does not identify or report users as a result of DMCA notices. User privacy and anonymity are always preserved.

6. Legally our company is only bound to respect subpoenas and court orders when they originate from the British Virgin Islands government or in conjunction with BVI authorities via a mutual legal assistance treaty. As a general rule, we reply to law enforcement inquiries by informing the investigator that we do not possess any data that could link activity or IP addresses to a specific user. Regarding a demand that we log activity going forward: Were anyone ever to make such a request, we would refuse to re-engineer our systems in a way that infringes on the privacy protections that our customers trust us to uphold.

Not storing any sensitive information also protects user privacy and security in the event of law enforcement gaining physical access to servers. This was proven in a high-profile case in Turkey in which law enforcement seized a VPN server leased by ExpressVPN but could not find any server logs that would enable investigators to link activity to a user or even determine which users, or whether a specific user, were connected at a given time.

7. We do not believe in restricting or censoring any type of traffic. ExpressVPN allows all traffic, including BitTorrent and other file-sharing traffic (without rerouting), from all of our VPN servers. At the moment, we do not support port forwarding.

8. ExpressVPN accepts all major credit cards, PayPal, and a large number of local payment options. We also accept Bitcoin, which we recommend for those who seek maximum privacy in relation to their form of payment. As we do not log user activity, IP addresses, or timestamps, there is no way for ExpressVPN or any external party to link payment details entered on our website with a user’s VPN activities.

9. By default, ExpressVPN automatically chooses the protocol best-suited to your network depending on a variety of factors. For example, our primary protocol, OpenVPN, uses a 4096-bit CA with AES-256-GCM encryption, TLSv1.2, and SHA256 signatures to authenticate traffic.
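For readers unfamiliar with the AES-256-GCM data-channel cipher mentioned above, the short sketch below (Python with the third-party cryptography package; an illustration of the primitive, not ExpressVPN’s code) shows what an authenticated cipher does with a packet: decryption only succeeds if the ciphertext and its tag are intact.

```python
# AES-256-GCM in isolation: encrypt and authenticate a single packet.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit session key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per packet

packet = b"example VPN payload"
ciphertext = aesgcm.encrypt(nonce, packet, None)

# Decryption verifies the built-in authentication tag; a tampered
# ciphertext raises InvalidTag instead of returning corrupted data.
assert aesgcm.decrypt(nonce, ciphertext, None) == packet
```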

10. Yes, our Network Lock feature, which is turned on by default, prevents all types of traffic including IPv4, IPv6, and DNS from leaking outside of the VPN. We do not yet support IPv6 routing through the VPN tunnel. ExpressVPN also protects users from data leaks in a number of ways.

11. Our VPN servers are hosted in trusted data centers with strong security practices, where the data center employees do not have server credentials. The efforts we take to secure our VPN server infrastructure are extensive and have been audited. For example, with our proprietary TrustedServer technology, we reinstall the entire VPN server software stack from scratch with every reboot, ensuring we have complete confidence in what software is running on each of our servers and that no unauthorized software or backdoors can persist on these servers. More details are available here.

We run our own logless DNS on every server, meaning no personally identifiable data is ever stored. We do not use third-party DNS.

12. ExpressVPN has over 3,000 servers in 94 countries. For more than 97% of these servers, the physical server and the associated IP addresses are located in the same country. For countries where it is difficult to find servers that meet ExpressVPN’s rigorous standards, we use virtual locations. The specific countries are published on our website here.

ExpressVPN details

NordVPN

1. We do not keep connection logs nor timestamps that could allow us to match customers with their activity.

2. Tefincom S.A., operating under the jurisdiction of Panama.

3. We are only able to see the server load. We also use an automated tool that limits the maximum number of concurrent connections to six per customer. Apart from that, we do not use any other tools.

4. NordVPN uses third-party data processors for emailing services and to collect basic website and app analytics. We use Iterable for correspondence, Zendesk to provide customer support, Google Analytics to monitor website and app data, as well as Crashlytics, Firebase Analytics and Appsflyer to monitor application data. All third-party services we use are bound by a contract with us to never use the information of our users for their own purposes and not to disclose the information to any third parties unrelated to the service.

5. NordVPN is a transmission service provider, operating in Panama. DMCA takedown notices are not applicable to us.

6. If the order or subpoena is issued by a Panamanian court, we would have to provide the information if we had any. However, our zero-log policy means that we do not store any information about our users’ online activity – only their email address and basic payment info. So far, we haven’t had any such cases.

7. We do not restrict any BitTorrent or other file-sharing applications on most of our servers. We have optimized a number of our servers specifically for file-sharing. At the moment, we do not offer port forwarding and block outgoing SMTP 25 and NetBIOS ports.

8. Our customers are able to pay via all major credit cards, regionally localized payment solutions and cryptocurrencies. Our payment processing partners collect basic billing information for payment processing and refund requests, but this information cannot be connected to the internet activity of a particular customer. Bitcoin is the most anonymous option, as it does not link the payment details to the user identity or other personal information.

9. All our protocols are secure, however, the most advanced encryption is used by NordLynx. NordLynx is based on the WireGuard® protocol and uses ChaCha20 for encryption, Poly1305 for authentication and integrity, and Curve25519 for the Elliptic-curve Diffie–Hellman key agreement protocol.
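To make those names concrete, here is a standalone sketch (Python, third-party cryptography package) of the primitives listed above: a Curve25519 (X25519) key agreement feeding a ChaCha20-Poly1305 AEAD. It illustrates the building blocks only; it is not WireGuard’s actual Noise_IKpsk2 handshake or NordVPN’s implementation.

```python
# The building blocks behind NordLynx/WireGuard, shown in isolation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Curve25519 key agreement: both sides derive the same shared secret.
client_key, server_key = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared_secret = client_key.exchange(server_key.public_key())

# Derive a 256-bit transport key (WireGuard's real KDF is BLAKE2s-based,
# inside the Noise framework; SHA-256 is used here just for the demo).
transport_key = HKDF(algorithm=hashes.SHA256(), length=32,
                     salt=None, info=b"demo").derive(shared_secret)

# ChaCha20-Poly1305: encryption plus Poly1305 authentication in one call.
aead = ChaCha20Poly1305(transport_key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"tunnelled packet", None)
assert aead.decrypt(nonce, ciphertext, None) == b"tunnelled packet"
```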

10. We provide automatic kill switches and DNS leak protection. Dual-Stack IPv4/IPv6 functionality is not yet supported with our service; however, all NordVPN apps offer an integrated IPv6 Leak Protection.

11. Most of our servers are leased; however, the security of our infrastructure is our top priority. To elevate our standards to a higher level, we have partnered with VerSprite, a global leader in cybersecurity consulting and advisory services. Due to our special server configuration, no one is able to collect or retain any data, ensuring compliance with our no-logs policy. We do have our own DNS servers, and all DNS requests travel through a VPN tunnel. Our customers can also manually set up any DNS server they like.

12. We do not offer virtual locations; our servers are located where we state they are. At the time of writing, we have almost 6,000 servers in 59 countries.

NordVPN details

HideIPVPN

1. We do not store or share any information that would allow that. The only information we store is related to the payment process, and it is not shared anywhere outside the payment systems.

2. The registered name of the company is Server Management LLC and we operate under US jurisdiction.

3. A single subscription can be used simultaneously for three connections. Abuse of the service usually means using non-P2P servers for torrents, or triggering DMCA notices.

Also, our no-log policy makes it impossible to track who downloaded/uploaded any data from the internet using our VPN. We use an iptables plugin to block P2P traffic on servers where P2P is not explicitly allowed. We block outgoing mail on port 25 to prevent spamming activity.

4. We use the live chat provided by tawk.to and Google Apps for incoming email. For outgoing email, we use our own SMTP server.

5. Since no information is stored on any of our servers there is nothing that we can take down. We reply to the data center or copyright holder that we do not log our user’s traffic and we use shared IP-addresses, which make it impossible to track who downloaded any data from the internet using our VPN.

6. HideIPVPN may disclose information, including but not limited to, information concerning a client, to comply with a court order, subpoena, summons, discovery request, warrant, statute, regulation, or governmental request. But because we have a no-logs policy and we use shared IPs, there won’t be anything to disclose, except billing details. This has never happened before.

7. This type of traffic is welcomed on our German (DE VPN), Dutch (NL VPN), Luxembourg (LU VPN) and Lithuanian (LT VPN) servers. It is not allowed on US, UK, Canada, Poland, Singapore, and French servers as stated in our TOS. The reason for this is our agreements with data centers. We do not allow port forwarding and we block ports 22 and 25 for security reasons.

8. HideIPVPN accepts the following methods: PayPal, Bitcoin, Credit & Debit cards, JCB, American Express, Diners Club International, Discover. All our clients’ billing details are stored in the WHMCS billing system.

9. SoftEther VPN protocol looks very promising and secure. Users can currently use our VPN applications on Windows and OSX systems. Both versions have a “kill switch” feature in case the connection drops. Our apps can re-establish a VPN connection and once active restart closed applications. Also, the app has the option to enable DNS leak protection.

10. Yes, our free VPN apps have both features built-in. It is worth mentioning that our free VPN apps for Windows and macOS – there are brand new versions of them – have even more cool and unique features. We were one of the first – if not THE FIRST – to introduce what you call a “kill switch” in our apps. Now, we give users the ability to easily choose the best, “fastest” VPN server available in their location – a “Sort by speed” option.

11. We don’t have physical control of our VPN servers. Servers are outsourced in premium data-centers with high-quality Tier 1 networks. Our servers are self-managed and access is restricted to our personnel only.

12. At the moment we have VPN servers located in 11 countries – US, UK, Netherlands, Germany, Luxembourg, Lithuania, Canada, Poland, France, Australia and Singapore.

HideIPVPN website

IVPN

1. No. We believe that not logging VPN connection related data is fundamental to any privacy service regardless of the security or policies implemented to protect the log data. Specifically, we don’t log: traffic, DNS requests, connection timestamps and durations, bandwidth, IP addresses or any account activity except simultaneous connections.

2. Privatus Limited, Gibraltar. No parent or holding companies.

3. We limit simultaneous connections by maintaining a temporary counter on a central server that is deleted when the user disconnects (we detail this process in our Privacy Policy).

4. No. We made a strategic decision from day one that no company or customer data would ever be stored on third-party systems. All our internal services run on our own dedicated servers that we setup, configure and manage. No third parties have access to our servers or data. We don’t host any external scripts, web trackers or tracking pixels on our website. We also refuse to engage in advertising on platforms with surveillance-based business models, like Google or Facebook.

5. Our legal department sends a reply stating that we do not store content on our servers and that our VPN servers act only as a conduit for data. In addition, we inform them that we never store the IP addresses of customers connected to our network nor are we legally required to do so. We have a detailed Legal Process Guideline published on our website.

6. Firstly, this has never happened. However, if asked to identify a customer based on a timestamp and/or IP address then we would reply factually that we do not store this information. If legally compelled to log activity going forward we would do everything in our power to alert the relevant customers directly (or indirectly through our warrant canary).

7. We do not block any traffic or ports on any servers. We provide a port forwarding service.

8. We accept Bitcoin, Cash, PayPal, and credit cards. When using cash there is no link to a user account within our system. When using Bitcoin, the transaction is processed through our self-hosted BitPay server. We store the Bitcoin transaction ID in our system.
If you wish to remain anonymous to IVPN you should take the necessary precautions when purchasing Bitcoin. When paying with PayPal or a credit card a token is stored that is used to process recurring payments but this is not linked in any way to VPN account usage or IP-assignments.

9. We offer and recommend WireGuard, a high-performance protocol that utilizes state-of-the-art cryptography. Since its merge into Linux Kernel (v5.6) and the release of 1.0 version of the protocol, we consider it to be ready for wide-scale use. Alternatively, we also offer OpenVPN with RSA-4096 / AES-256-GCM, which we also believe is more than secure enough for the purposes for which we provide our service.

10. Yes, the IVPN client offers an advanced VPN firewall that blocks every type of IP leak possible, including IPv6, DNS, network failures, WebRTC STUN, etc. Our VPN clients work dual-stack (IPv4/IPv6), but we currently only support IPv4 on our VPN gateways.

11. We use bare metal dedicated servers leased from third-party data centers in each country where we have a presence. We install each server using our own custom images and employ full disk encryption to ensure that if a server is ever seized the data is worthless.
We also operate an exclusive multi-hop network allowing customers to choose an entry and exit server in different jurisdictions which would make the task of legally gaining access to servers at the same time significantly more difficult. We operate our own network of log-free DNS servers that are only accessible to our customers through the VPN tunnel.

12. We have servers in 32 countries. No virtual locations. Full list of servers is available here.

IVPN website

AzireVPN

1. No, we do not record or store any logs related to our services. No traffic, user activity, timestamps, IP addresses, number of active and total sessions, DNS requests, or any other kind of logs are stored.

2. The registered company name is Netbouncer AB and we operate under Swedish jurisdiction where there are no data retention laws that apply to VPN providers.

3. We took extra security steps to harden our servers. They run in Blind Operator mode, a software module that ensures it is extremely difficult to set up any kind of traffic monitoring. Abuses like incoming DDoS attacks are usually mitigated with UDP filtering on the source port used by an attacker.

4. No, we do not rely on and refuse to use external third-party systems. We run our own email infrastructure and encourage people to use PGP encryption for reaching us. The ticketing support system, website analytics (Piwik, with anonymization settings) and other tools are hosted in-house on open-source software.

5. We politely inform the sender that we do not keep any logs and are unable to identify a user.

6. In the case that a valid court order is issued, we will inform the other party that we are unable to identify an active or former user of our service due to our particular infrastructure. In that case, they would probably force us to hand over physical access to the server, which they would have to reboot to disable Blind Operator mode and gain any kind of access. Since we run our custom system images directly from RAM, all data would be lost.

So far, we have never received any court order and no personal information has ever been given out.

7. Yes, BitTorrent, peer-to-peer and file-sharing traffic is allowed and treated equally to any other traffic on all of our servers. We do not provide port forwarding services yet; however, we do provide a public IPv4+IPv6 address mode on OpenVPN which assigns IP addresses that are used by only one user at a time for the whole duration of the connection to the server. In this mode, all ports are open, with the exception of unencrypted outgoing port 25 TCP, usually used by the SMTP protocol, which is blocked to prevent abuse by spammers.

8. As of now, we offer a variety of payment options including anonymous methods such as Bitcoin, Litecoin, Monero and some other cryptocurrencies, and cash money via postal mail. We also offer PayPal (with or without recurring payments), credit cards (VISA, MasterCard and American Express through Paymentwall) and Swish. We do not store sensitive payment information on our servers, we only retain an internal reference code for order confirmation, and the customer connected to the transaction information is removed after 6 months.

9. We recommend our users to use our WireGuard servers, using the official clients available on Windows, Linux, macOS and OpenWrt (routers). We also offer an easy-to-use WireGuard-based client on Android and iOS.

– Data channel cipher: ChaCha20 with Poly1305 for authentication and data integrity.
– Authenticated key exchange: Noise Protocol Framework’s Noise_IKpsk2, using Curve25519, Blake2s, ChaCha20, and Poly1305. It uses a formally verified construction.

10. We offer a custom open-source VPN application called azclient for all major desktop platforms (Windows, macOS and Linux) which currently supports OpenVPN. Its source code is released on Github under the GPLv2 license. We are currently revamping this client to a WireGuard-based one and are planning to add a kill switch and DNS leak protection features to it in the future.

As we provide our users with full dual-stack IPv4+IPv6 functionality on all servers and VPN protocols, we do not need to provide any IPv6 leak protection. Our tunnels natively support IPv6 even from IPv4-only Internet lines, by tunneling IPv6 traffic into IPv4 transparently. Also, our WireGuard servers can be reached through both IPv4 and IPv6.

11. We physically own all our servers in all locations, co-located in closed racks in different data centers around the world meeting our strict security criteria, using dedicated network links and carefully chosen network upstream providers for maximum privacy and network quality. We host our own non-logging DNS servers in different locations.

12. As of now, we operate across 11 locations on 3 continents. New locations in France, Germany, Romania, Spain and Switzerland are planned soon. There are no virtual locations.

AzireVPN website

Windscribe

1. No.

2. Windscribe Limited. Ontario, Canada.

3. Byte count of all traffic sent through the network in a one month period as well as a count of parallel connections at any given moment.

4. No. Everything is self-hosted.

5. Our transparency policy is available here.

6. Under Canadian law, a VPN company cannot be compelled to wiretap users. We can be legally compelled to provide the data that we already have (as per our ToS) and we would have to comply with a valid Canadian court order. But since we do not store any identifying info that can link an IP to an account, emails are optional at registration, and the service can be paid for with cryptocurrency, none of what we store is identifying.

7. We allow P2P traffic in most locations. Yes, we provide port forwarding for all Pro users. Only ports above 1024 are allowed.

8. Stripe, Paypal, Coinpayments, Paymentwall. IP addresses of users are not stored or linked to payments.

9. The encryption parameters are similar for all protocols we support. AES-256 cipher with SHA512 auth and a 4096-bit RSA key. We recommend using IKEv2, as it’s a kernel space protocol that is faster than OpenVPN in most cases.

10. Our desktop apps have a built-in firewall that blocks all connectivity outside of the tunnel. In the event of a connection drop, it fails closed – nothing needs to be done. The firewall protects against all leaks: IPv4, IPv6 and DNS. We only support IPv4 connectivity at this time.

11. We lease servers in over 150 different datacenters worldwide. Some datacenters deploy network monitoring for the purposes of DDoS protection. We request to disable it whenever possible, but this is not feasible in all places. Even with it in place, since most servers have dozens/hundreds of users connected to them at any given moment, your activity gets “lost in the crowd”. Each VPN server operates a recursive DNS server and performs all DNS resolution locally.

12. Our server overview is available here. We don’t offer virtual locations.

Windscribe website

VPNArea

1. We do not keep or record any logs. We are therefore not able to match an IP-address and a time stamp to a user of our service.

2. The registered name of our company is “Offshore Security EOOD” (spelled “ОФШОР СЕКЮРИТИ ЕООД” in Bulgarian). We’re a VAT registered business. We operate under the jurisdiction of Bulgaria.

3. To prevent email spam abuse we block mail ports used for such activity, but we preemptively whitelist known and legit email servers so that genuine mail users can still receive and send their emails.

To limit concurrent connections to 6, we use an in-house developed system that adds or subtracts 1 from the user’s “global-live-connections-count” in our database, which the authentication API communicates with anonymously each time the user connects to or disconnects from a server. The process does not record any data about which servers the additions/subtractions come from, or any other data at any time; logging is completely disabled at the API.
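As an illustration of how such a counter can work without logging anything, here is a hypothetical sketch (Python; the limit of 6 comes from the answer above, everything else is an assumption, not VPNArea’s code): the only state is an in-memory number per account, incremented on connect and decremented on disconnect.

```python
# Hypothetical in-memory "live connections" counter; nothing is written to
# disk and nothing records which server an event came from.
from collections import defaultdict
from threading import Lock

MAX_CONCURRENT = 6                  # limit stated in the answer above
_live = defaultdict(int)            # account id -> current connection count
_lock = Lock()

def on_connect(account: str) -> bool:
    """Return True if the new connection is allowed, False if over the limit."""
    with _lock:
        if _live[account] >= MAX_CONCURRENT:
            return False
        _live[account] += 1
        return True

def on_disconnect(account: str) -> None:
    with _lock:
        _live[account] = max(0, _live[account] - 1)
        if _live[account] == 0:
            del _live[account]      # nothing retained once the user is offline
```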

4. We host our own email servers. We host our own Ticket Support system on our servers. The only external tools we use are Google Analytics for our website and Live Chat software.

5. DMCA notices are not forwarded to our users as we’re unable to identify a responsible user due to not having any logs or data that can help us associate an individual with an account. We would reply to the DMCA notices explaining that we do not host or hold any copyrighted content ourselves and we’re not able to identify or penalize a user of our service.

6. This has not happened yet. Should it happen, our attorney will examine the validity of the court order in accordance with our jurisdiction; we will then inform the appropriate party that we’re not able to match a user to an IP or timestamp, because we’re not recording any logs.

7. BitTorrent and torrents in general are allowed on all our servers. We offer port forwarding only on the dedicated IP private VPN servers at the moment, with the goal of allowing it on shared servers too. The only ports which are blocked are those widely related to abuse, such as spam.

8. We accept PayPal, Credit/Debit cards, AliPay, Bitcoin, Bitcoin Cash, Ethereum, WebMoney, GiroPay, and bank transfers. In the case of PayPal/card payments, we link usernames to the transactions so we can process a refund. We do take active steps to make sure payment details can’t be linked to account usage or IP assignments. In the case of Bitcoin, BCH, ETH we do not link usernames to transactions.

9. We use AES-256-CBC + SHA256 cipher and RSA4096 keys on all our OpenVPN servers without exception. We also have Double VPN servers, where for example the traffic goes through Russia and Israel before reaching the final destination. We also have Tor over VPN servers to provide diversity in the anonymous setup a user prefers.
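For context, AES-256-CBC with SHA256 means the data channel is encrypted with AES in CBC mode and then authenticated with an HMAC. The sketch below (Python, third-party cryptography package) shows that encrypt-then-MAC pattern on a single packet; it is a simplified illustration, not OpenVPN’s exact packet format or VPNArea’s configuration.

```python
# Encrypt-then-MAC with AES-256-CBC and HMAC-SHA256, on one packet.
import os
from cryptography.hazmat.primitives import hashes, hmac, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(32), os.urandom(32)
iv = os.urandom(16)

# Pad to the AES block size, then encrypt with AES-256 in CBC mode.
padder = padding.PKCS7(128).padder()
padded = padder.update(b"example VPN payload") + padder.finalize()
encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# Authenticate IV + ciphertext with HMAC-SHA256; the receiver verifies this
# tag first and drops the packet if it does not match.
mac = hmac.HMAC(mac_key, hashes.SHA256())
mac.update(iv + ciphertext)
tag = mac.finalize()
print(f"packet = IV(16) + ciphertext({len(ciphertext)}) + tag({len(tag)})")
```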

10. Yes, we provide both KillSwitch and DNS Leak protection. We actively block IPv6 traffic to prevent IP leaks, so connections are enforced via IPv4.

11. We use our own no-logs DNS servers. We work with reliable and established data centers. Nobody but us has virtual access to our servers. The entire log directories are wiped and disabled, rendering possible physical brute-force access to the servers useless in terms of identifying users.

12. All our servers are physically located in the stated countries. A list of our servers in 60+ countries is available here.

VPNArea website

AirVPN

1. No, we do not keep or share with third parties ANY data that would allow us to match an IP address and a timestamp to a current or former user of our service.

2. AirVPN in Italy. No parent company/companies.

3. No tools are used.

4. No, we do not use any external email providers, analytics, or support tools that hold information provided by users.

5. They are ignored if they pertain to P2P; they are processed, verified and handled accordingly (rejected or accepted) if they pertain to websites (or FTP services, etc.) hosted behind our VPN servers.

6. a) We would co-operate to the best of our abilities, although we can’t give out information we don’t have. b) We are unable to comply due to technical problems and limitations. c) The scenario in ‘case b’ has never occurred. The scenario in ‘case a’ has occurred multiple times, but our infrastructure does not monitor, inspect or log customers’ traffic, so it is not possible to correlate customer information (if we had it) with customers’ traffic and vice-versa.

7. a) Yes, BitTorrent and other file-sharing traffic is allowed on all servers. AirVPN does not discriminate against any protocol or application and keeps its network as agnostic as possible. b) Yes, we provide remote inbound port forwarding service. c) Outbound port 25 is blocked.

8. We accept payments via PayPal and all major credit cards. We also accept Bitcoin, Ethereum, Litecoin, Bitcoin Cash, Dash, Doge, and Monero. By directly accepting various cryptocurrencies without intermediaries we avoid privacy issues, including correlations between IP addresses and payments. By directly accepting Monero we also offer our customers the option to pay via a cryptocurrency which protects transactions with a built-in layer of anonymity.

9. CHACHA20-POLY1305 and AES-256-GCM

10. a) We provide Network Lock in our free and open-source software. It can prevent traffic leaks (both IPv4 and IPv6 – DNS leaks included) even in the case of wrong binding by application or system processes, UPnP-caused leaks, wrong settings, WebRTC and other STUN-related methods, and of course unexpected VPN disconnection. b) Yes, we do provide dual-stack IPv4/IPv6 access, including IPv6 over IPv4, pure IPv4 and pure IPv6 connections. In this way even customers whose ISP does not support IPv6 can access IPv6 services via AirVPN.

11. We do not own our datacenters and we are not a transit provider, so we buy traffic from Tier 1, Tier 2 and only occasionally Tier 3 providers and we house servers in various datacenters. The main countermeasures are: exclusive access to IPMI etc. via our own external IP addresses or a specific VPN for the IPMI etc.; reboot inhibition (requiring remote validation); some other methods we will not reveal. However, if server lines are wiretapped externally and transparently, and server tampering does not occur, there is no way inside the server to prevent, or be aware of, ongoing wiretapping. Wiretapping prevention must be achieved with other methods on the client side (some of them are integrated into our software), for example, VPN over Tor, Tor over VPN etc.

12. NO, we do not offer virtual locations and/or VPS. We declare only real locations of real “bare metal” servers.

AirVPN website

CactusVPN

1. No, we don’t keep any information of this type.

2. CactusVPN Inc., Canada

3. We restrict our services to up to 5 devices per package for VPN connections, and to unlimited devices for our SmartDNS service as long as all of them have the same IP address. Abuse of services is regulated by our Linux firewall, and most of the datacenters we hire servers from provide additional security measures against server attacks.

4. No

5. We did not receive any official notices yet. We will only respond to a local court order.

6. If we have a valid order from Canadian authorities we have to help them identify the user. But as we do not keep any logs we just can’t do that. We have not received any orders yet.

7. BitTorrent and other file-sharing traffic is allowed on our Netherlands, Germany, Switzerland, Spain, Latvia and Romania servers.

8. PayPal, Visa, MasterCard, Discover, American Express, Bitcoin & Altcoins, Alipay, Qiwi, Webmoney, Boleto Bancario, Yandex Money and other less popular payment options.

9. We recommend that users use SoftEther with the ECDHE-RSA-AES128-GCM-SHA256 cipher suite.

10. Yes, our apps include Kill Switch and Apps Killer options in case a VPN connection is dropped. Also, they include DNS leak protection. We only support IPv4.

11. We use servers from various data centers. All the VPN traffic is encrypted, so the datacenters cannot see the nature of the traffic; access to all servers is secured and no datacenter can see their configuration.

12. Here’s the link to all our servers.

CactusVPN website

Trust.Zone

1. Trust.Zone doesn’t store any logs. Therefore, we have no data that could be linked and attributed to a current or former user. All we need from customers is an email to sign up.

2. Trust.Zone is under Seychelles jurisdiction. The company is operated by Internet Privacy Ltd.

3. Our system can understand how many active sessions a given license has at a given moment in time. This counter is temporarily placed in RAM and never logged or saved anywhere.

4. Trust.Zone has never used any third-party tools like Google Analytics, live chat platform, support tools or other.

5. If we receive any type of DMCA request or Copyright Infringement Notice, we ignore them. Trust.Zone is under offshore jurisdiction, outside the 14 Eyes surveillance alliance. There is no data retention law in Seychelles.

6. A court order would not be enforceable because we do not log information and therefore there is nothing to be had from our servers. Trust.Zone supports Warrant Canary. Trust.Zone has not received or been subject to any searches, seizures of data, or requirements to log any actions of our customers.

7. BitTorrent and file-sharing traffic is allowed on all Trust.Zone servers. Moreover, we don’t restrict any kind of traffic. Trust.Zone does not throttle or block any protocols, IP addresses, servers or any type of traffic whatsoever.

8. All major credit cards are accepted. PayPal, Alipay, wire transfer, and many other types of payments are available. As we don’t store any logs, there is no way to link payment details with a user’s internet activity.

9. We use the most recommended protocols in the VPN industry – IKEv2/IPSec, OpenVPN. We also support our own protocol which is faster than OpenVPN and also includes Perfect Forward Secrecy (PFS). Trust.Zone uses AES-256 Encryption by default.

10. Trust.Zone supports a kill-switch function. We also own our DNS servers and provide users with the ability to use our DNS to avoid any DNS leaks. All features listed above are also available with a 30-day Free Plan. Trust.Zone does not support IPv6 to avoid any leaks. We also provide users with additional recommendations to be sure that there are no DNS leaks or IP leaks.

11. We have a mixed infrastructure. Trust.Zone owns some physical servers and we have access to them physically. In locations with lower utilization, we normally host with third parties. But the most important point is that in this case we only use dedicated servers, with full control by our network administrators. DNS queries go through our own DNS servers.

12. We are operating with 175+ dedicated servers in 93 geo-zones and are still growing. We also provide users with dedicated IP addresses if needed. The full map of the server locations is available here.

Trust.Zone website

SwitchVPN

1. No, SwitchVPN does not store any logs which would allow anyone to match an IP address and a time stamp to a current or former user of our services.

2. Our company name is “CS SYSTEMS, INC” and it comes under United States jurisdiction.

3. We proactively take steps to mitigate abuse of our service/servers by implementing certain firewall rules, such as blocking default SMTP ports which are likely to be abused by spammers.

4. We use Chatra for providing Live Chat and our web-based ticketing system which is self-hosted. No personal information is collected.

5. SwitchVPN is a transitory digital network communications provider as per 17 U.S.C. § 512(a) of the Copyright Act. So, in order to protect the privacy of our users, we use shared IP addresses, which makes it impossible to pinpoint any specific user. If the copyright holder only provides us with an IP address as identifying information, then it is impossible for us to associate a DMCA notice with any of our users.

6. There have been no court orders since we started our operation in 2010, and as we do not log our users’ sessions and we utilize shared IP addresses, it is not possible to identify any user solely based on timestamps or IP addresses. Currently, there are no mandatory data logging requirements in the United States but in case the situation changes, we will migrate our company to another privacy-friendly jurisdiction.

7. Yes, we have P2P-optimized servers that provide dynamic port forwarding. They can be easily filtered in our VPN application.

8. We accept all major payment methods such as Credit Card, PayPal, Bitcoin and other Crypto Currencies. We use shared IPs and every account is assigned an alias username for connecting to the VPN server.

9. SwitchVPN utilizes 256-bit AES encryption with SHA512 channel authentication by default.

10. Yes, Kill Switch & DNS Leak protection is provided on our Windows and Mac application. Currently we only support IPv4.

11. Before we get into an agreement with any third party, we make sure the company does not have a poor history on privacy and that it is in line with our privacy requirements for providing our users with a no-log VPN service. We also use our own DNS servers to anonymize all DNS requests.

12. All of our servers are physically located in the countries we have mentioned; we do not use virtual locations.

SwitchVPN website

PrivateVPN

1. We DO NOT keep any logs. We do not store logs relating to traffic, session, DNS, or metadata.

2. We’re registered in Sweden under the name “Privat Kommunikation Sverige AB”.

3. The nature of our VPN service makes it practically impossible for us to do any sort of monitoring of abuses. We do monitor the real-time state of the total number of connections per user account, as we allow 6 simultaneous connections. This specific information is never stored.

4. We use LAdesk support tools, including a ticket system and live chat. Chat logs remain on the chat server for the duration of the chat session, are then optionally sent by email to the user, and are then destroyed.

5. Since we don’t keep any information on any of our servers, the DMCA is not applicable to our service, as it is not a codified law or act under Swedish jurisdiction.

6. We don’t retain or log any identifiers at all. So, basically, even when ordered to actively investigate a user, we are limited to the number of active logins, which is just a numerical value. That being said, we have not received a court order to date.

7. P2P is allowed on all our servers as a matter of policy. We are not in the business of restricting and throttling things. The whole point of a user connecting to our VPN servers is to get uncensored and unrestricted Internet. We do support port forwarding, from one open port up to all ports opened.

8. We accept all forms of credit/debit card payments through the Stripe payment gateway, the PayPal payment method, and Bitcoin. A credit card or a PayPal payment has to be linked to a user account for us to be able to refund a customer due to our 30-day money-back guarantee. More importantly, a VPN IP can’t be linked to a user account.

9. OpenVPN over UDP with 256-bit security for both data and TLS control channel encryption, and WireGuard.

10. Our Windows and macOS VPN apps offer a robust kill switch and DNS leak protection. DNS leaks on any major platform are due to broken installations, which are fixed as soon as we see a report of any issues. IPv6 leak protection is available on every platform and multiple VPN protocols. We offer guides and instructions to set up a kill switch on macOS, GNU/Linux, and Android. At this stage, we do not support any dual-stack IPv4/IPv6 functionality.

11. We have physical control over our servers and network in Sweden, Denmark, Germany, the Netherlands, the United Kingdom (London), France, Italy, Spain, Switzerland, the USA (NYC and LA), and Canada (Toronto), as those locations and networks are 100% managed and owned by PrivateVPN. In all other locations, we use a variety of different hosting providers, such as M247. All inbound and outbound traffic is encrypted and can’t be inspected. Yes, each VPN server has its own DNS server which is pushed to the VPN client.

12. We use a mix of physical and virtual servers depending on the demand and needs of a given location. Virtual servers are categorized in our server list on our website to avoid confusion and maintain transparency.

PrivateVPN website

WhatTheServer

1. We do not maintain any logs that would allow us to identify a user.

2. What The * Services, LLC is incorporated in the USA.

3. As mentioned above we do not log. We have no way to log bandwidth. All limiting is done by active sessions to prevent one person from sharing an account with hundreds of people. We use a custom session management system that operates completely on real-time data and keeps no logs.

4. We run our own communications infrastructure. No analytics are used currently.

5. We send out the below response as we have no logs. “Thanks for the note today. Just for clarification to you (‘InsertDatacenterNameHere’) and you only (this message is not for distribution); the operator(s) of the named network(s) within the notification provide no validation of any claim(s) made on behalf of an ‘abuse’ complainant. The operator(s) of this network, hosts, and network devices have no knowledge of any activities named in the complaint and operate in the absence of logs, records, or other commonly used identifying materials. We appreciate you (‘InsertDatacenterNameHere’) bringing such items to our attention, and if we are able to assist in any way in the future, please let us know. Thanks. This ticket may be closed upon receipt and review.”

6. We have only had one of these requests for a VPS client. We responded by replying to the requester letting them know we were looking into it, and we notified the customer via his email on file. Then we contacted the EFF and they put us in touch with a lawyer who helped us get the case dropped, because we did not have the information requested. If we do have another request in the future we will take several steps. First, we would consult with our lawyers to confirm the validity of the order/subpoena, and respond accordingly if it is NOT a valid order/subpoena. Then we would alert our user of the event if we are legally able to.

If the order/subpoena is valid, we would see if we have the ability to provide the information requested, and respond accordingly if we do NOT have the information requested. If we DO have the information requested, we would immediately reconfigure our systems to stop keeping that information. Then we would consult with our lawyer to determine if there is any way we can fight the order/subpoena and/or what is the minimum level of compliance we must meet, as well as notify the user of the event if we are legally able to do so. If we were forced to start keeping logs on our users, we would go out of business and start a new company in a different jurisdiction.

7. We allow file sharing on our network. We do ask people to use the EU nodes for file-sharing. We have no way to enforce that, but it helps to prevent the USA-based nodes from complaints and shutdown from overzealous copyright trolls. We do offer port forwarding plans with our Perfect Dark Plans. We do not block any ports or monitor.

8. We accept PayPal and Cryptocurrency. All that is required is a working email for signup. Signups via Tor or proxies are highly encouraged along with placeholder information if paying in cryptocurrency. We also use a completely different authentication infrastructure and random usernames for the VPN accounts.

9. We recommend OpenVPN. Our VPN has Perfect Forward Secrecy set up with ECDHE-RSA-AES256-GCM-SHA384 on all our VPN servers, which are based on SoftEther and Ubuntu, allowing people to use any protocol their devices support. This ensures maximum compatibility and the best protection for all.

10. Our VPN profiles are compatible with Qomui (Qt OpenVPN Management UI) and other open-source VPN clients that have this built in. We push custom ad-blocking DNS to clients. We also have ‘push “block-outside-dns”’ in our OpenVPN server config files, which will prevent the client from leaking DNS requests. Additionally, we include “resolve-retry infinite” and “persist-tun” in the OpenVPN client config files, which will prevent the client from sending data in the clear if the VPN connection goes down. We do have dual-stack IPv4/IPv6 support, which can be used if IPv6 is enabled on the device.
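The directives quoted above are real OpenVPN options. As a hedged illustration of how a user might confirm their own client profile carries them, here is a small hypothetical checker (Python; the file name and the checker itself are assumptions, only the directive names come from the answer). Note that block-outside-dns is often pushed by the server rather than written into the client file.

```python
# Hypothetical helper: warn if leak-protection directives are missing
# from a client .ovpn profile.
import sys

EXPECTED = {
    "resolve-retry infinite",   # keep retrying instead of failing over to the clear
    "persist-tun",              # keep the tun device up across restarts
    "block-outside-dns",        # Windows: stop DNS queries leaving the tunnel
}

def check_profile(path: str) -> None:
    with open(path) as fh:
        present = {line.strip() for line in fh
                   if line.strip() and not line.lstrip().startswith("#")}
    for directive in sorted(EXPECTED):
        status = "OK     " if directive in present else "MISSING"
        print(f"{status} {directive}")

if __name__ == "__main__":
    check_profile(sys.argv[1] if len(sys.argv) > 1 else "client.ovpn")
```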

11. All of our infrastructure is hosted in third-party colocations. However, we use full-disk-encryption on all of our servers. We also use custom DNS servers with adblocking to mitigate tracking from ad networks. We notice this also speeds up mobile devices and removes ads from lots of the apps without paid ad-free versions.

12. We offer VPN server locations in the US, NL, UK, HK, and JP. We do offer virtual locations upon request.

WhatTheServer website

ibVPN

1. We do not keep, and we do not share with third parties, ANY logs that can identify a user of our service with an IP address and/or a timestamp. We are also GDPR compliant and (in our opinion) keeping this kind of log does not respect the Privacy by Design guidelines.

2. The company’s registered name is Amplusnet SRL. We are a Romanian company, which means we are under EU jurisdiction. In Romania, there are no mandatory data retention directives.

3. We limit the number of concurrent connections and we are using Radius for this purpose.

4. The back end of the website is a dedicated WHMCS for billing and support tickets. We do not use external email providers (we host our own mail server). Our users can contact us via live chat (Zendesk). The chat activity logs are deleted on a daily basis. There is no way to associate any information provided via live chat with the users’ accounts.

5. So far we have not received any DMCA notices for any P2P server in our server list. That is normal considering that the servers are located in DMCA-free zones. For the rest of the servers, P2P and file-sharing activities are not allowed/supported.

6. So far, we have not received any court order. We do not support criminal activities, and in case of a valid court order, we must follow the EU laws under which we operate.

7. We have dedicated P2P servers that allow BitTorrent and other file-sharing applications. The servers are located in Netherlands, Luxembourg, Canada, Sweden, Russia, Hong Kong and Lithuania. We do not reroute P2P connections. We do not provide port forwarding. We are blocking the SMTP ports 25 and 465 to avoid spam from our servers.

8. Payments are performed exclusively by third-party processors, thus no credit card info, PayPal IDs, or other identifying info is stored in our database. For those who would like to keep a low profile, we accept Bitcoin, Litecoin, Ethereum, WebMoney, Perfect Money, etc.

9. We support SSTP and SoftEther on most of the servers. We also offer double VPN and TOR over VPN.

10. Yes, Kill Switch and DNS leak protection are implemented in our VPN clients. Kill Switch is one of the most-used features. Our users can decide to block all the traffic when the VPN connection drops or to kill a list of applications. We allow customers to disable IPv6 traffic and to make sure that only our DNS servers are used while connected to the VPN. Also, we support SOCKS5 on our P2P servers which can be used for downloading torrents and do not leak any data if the connection to the SOCKS5 proxy drops.

11. We do not have physical control over our VPN servers. We have full remote control to all servers. Admin access to servers is not provided for any third-party.

12. The full list of server locations is available here.

ibVPN website

Mullvad

1. No, all details are explained in our no-logging data policy.

2. Mullvad VPN AB – Swedish. Parent company is Amagicom AB – Swedish.

3. We mitigate abuse by blocking the usage of ports 25, 137, 139, and 445 due to email spam and Windows security issues. The number of connections: Each VPN server reports to a central service. When a customer connects to a VPN server, the server asks the central service to validate the account number, whether or not the account has any remaining time, if the account has reached its allowed number of connections, and so on. Everything is performed in temporary memory only; none of this information is permanently stored to disk.

We also monitor the real-time state of total connections per account as we only allow for five connections simultaneously. As we do not save this information, we cannot, for example, tell you how many connections your account had five minutes ago.

4. We have no external elements at all on our website. We do use an external email provider; for those who want to email us, we encourage them to use PGP encryption which is the only effective way to keep email somewhat private. The decrypted content is only available to us.

5. As explained here, there is no such Swedish law that is applicable to us.

6. From time to time, we are contacted by governments asking us to divulge information about our customers. Given that we don’t store activity logs of any kind, we have no information to give out. Worst-case scenario: we would discontinue the servers in the affected countries. The only information AT ALL POSSIBLE for us to give out is records of payments, since these are stored at PayPal, banks, etc.

7. All traffic is treated equally, therefore we do not block or throttle BitTorrent or other file-sharing protocols. Port forwarding is allowed. Ports 25, 137, 139, and 445 are blocked due to email spam and Windows security issues.

8. We accept cash, Bitcoin, Bitcoin Cash, bank wire, credit card, PayPal, and Swish. We encourage anonymous payments via cash or one of the cryptocurrencies. We run our own full node in each of the blockchains and do not use third parties for any step in the payment process, from the generation of QR codes to adding time to accounts. Our website explains how we handle payment information.

9. We offer OpenVPN with RSA-4096 and AES-256-GCM. And we also offer WireGuard which uses Curve25519 and ChaCha20-Poly1305.

10. We offer a kill switch and DNS leak protection, both of which are supported for IPv6 as well as IPv4. While the kill switch is only available via our client/app, we also provide a SOCKS5 proxy that works as a kill switch and is only accessible through our VPN.
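The SOCKS5-proxy-as-kill-switch idea works because the proxy address only exists inside the tunnel: if the VPN drops, the application cannot reach the proxy and the request fails instead of leaking out over the normal connection. A hedged sketch of that behaviour follows (Python with requests[socks]; the proxy address and lookup URL are placeholders, not Mullvad’s actual endpoints).

```python
# Fail-closed behaviour via an in-tunnel SOCKS5 proxy (addresses are placeholders).
import requests

IN_TUNNEL_PROXY = "socks5h://10.0.0.1:1080"          # hypothetical in-tunnel address
PROXIES = {"http": IN_TUNNEL_PROXY, "https": IN_TUNNEL_PROXY}

try:
    # Any "what is my IP" style endpoint works here.
    resp = requests.get("https://api.ipify.org", proxies=PROXIES, timeout=10)
    print("Exit IP seen through the tunnel:", resp.text.strip())
except requests.exceptions.RequestException:
    # The tunnel (and therefore the proxy) is unreachable: nothing was sent
    # outside the VPN, which is exactly the kill-switch behaviour described.
    print("VPN appears to be down; request failed closed.")
```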

11. At 12 of our locations (4 in Sweden, 1 in Denmark, 1 in Amsterdam, 1 in Norway, 1 in the UK, 1 in Finland, 1 in Germany, 1 in Paris, 1 in Zurich) we own and have physical control over all of our servers. In our other locations, we rent physical, dedicated servers (which are not shared with other companies) and bandwidth from carefully selected providers. Keep in mind that we have 5 locations in the UK and 3 in Germany; the servers we physically own are the ones hosted by 31173.se (they start with gb-lon-0* and de-fra-0*, plus gb4-wireguard, gb5-wireguard, de4-wireguard and de5-wireguard).

Yes, we use our own DNS servers. All DNS traffic routed via our tunnel is ‘hijacked’: even if you accidentally select another DNS, our DNS will be used anyway, except if you have set up DNS over HTTPS or DNS over TLS.

12. We don’t have virtual locations. All locations are listed here.

Mullvad website

TorGuard

1. TorGuard has never kept or retained logs for any user. No timestamps or IP logs are kept on any VPN or authentication server. The only information TorGuard has is statistical network data which helps us to determine the load of a given server.

2. TorGuard is owned by VPNetworks LLC and its parent company Data Protection Services. We operate under US jurisdiction.

3. We use custom modules in a platform called Nagios to monitor VPN/Proxy hardware utilization, uptime and latency. TorGuard does enforce an eight device per user limit in real-time and each session is immediately wiped once the user has logged out. If that user failed to logout or was disconnected accidentally, our system automatically discards these stale sessions within a few minutes.

4. We use Google Apps for email and anonymized Google Analytics data for performance reporting. All support is handled internally and TorGuard does not utilize third-party tools for customer support.

5. If a valid DMCA takedown notice is received it would be handled by our legal team. Due to our no-log policy and shared IP network, we are unable to forward any requests to a single user.

6. If a court order is received, it is first handled by our legal team and examined for validity in our jurisdiction. Should it be deemed valid, our legal representation would be forced to further explain the nature of our shared IP network configuration and the fact that we do not hold any identifying logs or time stamps.

TorGuard’s network was designed to operate with minimum server resources and is not physically capable of retaining user logs. Due to the nature of shared VPN servers and the large traffic volume flowing through our network, it would not be possible to retain such logs. No, that scenario has never played out.

7. Yes, torrents work on all servers except our residential IP network as these are performance optimized for specific streaming platforms. TorGuard does offer port forwarding for all ports above 2048 and the only port we block outgoing is SMTP port 25 to prevent abuse.

8. We use Stripe for credit or debit card processing and utilize our own BTCPay instance for Bitcoin and Litecoin transactions. TorGuard accepts all cryptocurrency through coinpayments.net and uses Paymentwall and PayGarden for gift card payments. TorGuard has gone to great lengths, heavily modifying our billing system to work with various payment providers and to help protect our users' privacy.

9. For a high level of security, we would recommend using OpenVPN with AES-256-GCM-SHA512 using our stealth VPN protocol as an added measure through the TorGuard desktop or mobile apps.

10. Yes – our kill switch is uniquely designed to send all traffic into a *black hole* if the user loses connectivity or the app crashes for any reason. Dual stack IPv4/IPv6 is currently in development and will be released very soon.

11. We do have servers hosted by third parties, but we only select a location after extensive due diligence on very specific security criteria. We encrypt all disks, and so far 80% of our servers run on virtual RAM disks. We provide secure public DNS, but we also provide our internal DNS on every endpoint, which queries root VPN servers directly.

12. At this time we have three virtual locations: Taiwan, Greece and Mexico. TorGuard would rather not provide any virtual locations, but occasionally, when we cannot find a bare-metal data center that meets our security criteria, we use a virtual location rather than take the risk.

TorGuard website

Perfect Privacy

1. We do not store or log any data that would indicate the identity or the activities of a user.

2. The name of the company is VECTURA DATAMANAGEMENT LIMITED COMPANY and the jurisdiction is Switzerland.

3. The number of connections/devices at the same time is not limited because we do not track it. In case of malicious activity towards specific targets, we block IP addresses or ranges, so they are not accessible from our VPN servers. Additionally, we have limits on new outgoing connections for protocols like SSH, IMAP, and SMTP to prevent automated spam and brute force attacks. We do not use any other tools.

4. Our websites use Google Analytics to improve the quality of the user experience; it is GDPR compliant with anonymized IP addresses. You can prohibit tracking with a single click on a link provided in the privacy policy. If a customer has a problem with Google, they can disable tracking of all Google domains in TrackStop. We believe we are the only VPN provider who offers this possibility. All other solutions, such as email, support and even our affiliate program, are in-house software and under our control.

5. Because we do not host any data, DMCA notices do not directly affect us. However, we generally answer inquiries. We point out that we do not keep any data that would allow us to identify a user of the used IP address.

6. If we receive a Swiss court order, we are forced to provide the data that we have. Since we don’t log any IP addresses, timestamps or other connection-related data, the only step on our side is to inform the inquiring party that we do not have any data that would allow the identification of a user based on that data. Should we ever receive a legally binding court order that would require us to log the activity of a user going forward, we’d rather shut down the servers in the country concerned than compromise our user’s privacy.

There have been incidents in the past where Perfect Privacy servers have been seized, but no user information was compromised that way. Since no logs are stored in the first place and additionally all our services are running within RAM disks, a server seizure will never compromise our customers. Although we are not subject to US-based laws, there’s a warrant canary page available.

7. With the exception of our US servers and French servers, BitTorrent and other file-sharing software is allowed. We offer port forwarding and do not block any ports.

8. We offer Bitcoin, PayPal and credit cards for users who prefer these options, plus over 60 other payment methods. Of course, it is guaranteed that payment details are not associated with any IP addresses. The only thing we know about a person is that he or she is a customer of Perfect Privacy and which email address was used.

9. The most secure protocol we recommend is still OpenVPN with 256-bit AES-GCM encryption. With our VPN Manager for Mac and Windows you also have the possibility to create cascades over four VPN servers. This MultiHop feature works tunnel-in-tunnel. If you choose countries for the hops which are known not to cooperate with each other, well, you get the idea. On top of that you can activate our NeuroRouting feature, which changes the routing depending on the destination of the visited domain and dynamically selects different hops so that the outgoing server is geographically close to the visited server.

10. Yes, our servers support full Dual Stack IPv4/IPv6 functionality, even when your ISP does not support IPv6. Our VPN Manager has a “kill switch” which has configurable protection with three security levels.

11. We run dedicated bare-metal servers in various data centers around the world. While we have no physical access to the servers, they all are running within RAM disks only and are fully encrypted.

12. Currently, we offer servers in 26 countries worldwide. All servers are located in the city displayed in the hostname – there are no virtual locations. For full details about all server locations, please check our server status site, as we are constantly adding new servers.

Perfect Privacy website

SlickVPN

1. SlickVPN doesn’t log traffic or session data of any kind. We don’t store connection time stamps, used bandwidth, traffic logs, or IP addresses.

2. Slick Networks, Inc. is our recognized corporate name. We operate a complex business structure with multiple layers of offshore holding companies, subsidiary holding companies, and finally some operating companies to help protect our interests. The main marketing entity for our business is based in the United States of America but the top level of our operating entity is based out of Nevis.

3. We block port 25 to reduce the likelihood of spam originating from our systems. The SlickVPN authentication backend is completely custom and limits concurrent connections.

4. We utilize third party email systems to contact clients who opt-in for our newsletters and Google Analytics for basic website traffic monitoring and troubleshooting. We believe these platforms to be secure. Because we do not log your traffic/browsing data, no information about how users may or may not use the SlickVPN service is ever visible to these platforms.

5. If a valid DMCA complaint is received while the offending connection is still active, we stop the session and notify the active user of that session. Otherwise, we are unable to act on any complaint as we have no way of tracking down the user. It is important to note that we rarely receive a valid DMCA complaint while a user is still in an active session.

6. This has never happened in the history of our company. Our customers' privacy is of topmost importance to us. We are required to comply with all valid court orders and would proceed with complete transparency, but we have no data to provide any court in any jurisdiction. SlickVPN uses a warrant canary to inform users if we have received any such requests from a government agency. Users can monitor our warrant canary here: SlickVPN Warrant Canary.

7. Yes. All traffic is allowed. SlickVPN does not impose restrictions based on the type of traffic our users send. Outgoing mail is blocked, but we offer a method to split-tunnel the mail out if necessary. We can forward ports upon request. Some incoming ports may be blocked by our NAT firewall, but these can be opened on request.

8. We accept PayPal, Credit Cards, Bitcoin, Cash, and money orders. We keep user authentication and billing information on independent platforms. One platform is operated out of the United States of America (marketing) and the other platform is operated out of Nevis (operations).

Payment details are held by our marketing company which has no access to the operations data. We offer the ability for the customer to permanently delete their payment information from our servers at any point and all customer data is automatically removed from our records shortly after the customer ceases being a paying member.

9. We recommend using OpenVPN if at all possible (available for Windows, Apple, Linux, iOS, Android) and we use the AES-256-CBC algorithm for encryption.

10. Our leak protection (commonly called a ‘kill-switch’) keeps your IPv4 and IPv6 traffic from leaking to any other network and protects against DNS leaks. Your network will be disabled if you lose the connection to our servers and the only way to restore the network is manual intervention by the user. We don’t offer IPv6 connections at this time.

11. We physically control some of our server locations where we have a heavier load. Other locations are hosted with third parties unless there is enough demand in that location to justify racking our own server setup. To ensure redundancy, we host with multiple providers in each location. We have server locations in over forty countries.

In all cases, our network nodes load over our encrypted network stack and run from RAMDisk. Anyone taking control of the server would have no usable data on the disk. We periodically remount our ramdisks to remove any lingering data. Each of our access servers acts as the DNS server for customers connected to that node.

12. SlickVPN offers VPN service in 40 countries around the world. We do not offer virtual locations.

SlickVPN website

HeadVPN

1. We do not keep any logs on our network servers that can match an IP address and time stamp with a user.

2. Our service is incorporated under a company in Seychelles for our users’ security and anonymity. The company name is Global Stealth, Inc.

3. There are no such limits on our network.

4. Yes, we are using Google Analytics for our website traffic analysis. We also use Zendesk as our chat platform.

5. We don’t receive DMCA notices as we have a special server network in DMCA-free zones.

6. It will be basically ignored.

7. BitTorrent and P2P are allowed on our special networks designed for this purpose. These networks have all ports open.

8. We support credit card and PayPal. Payments can be linked to accounts.

9. We support protocols with AES-256 SSL encryption over multiple ports.

10. Yes, we support a kill switch for our users.

11. All our servers are hosted on globally known data centers with high security. We have our global DNS and SmartDNS network.

12. We have servers in more than 80 countries globally.

HeadVPN website

VPNhub

1. We do not keep any logs of data transmitted through our service and we have no way of knowing what our users are doing while connected to our servers. However, we will note that all payment processors store IP data for the purpose of fraud mitigation. Our payment processor is no different.

2. We operate under AppAtomic, physically headquartered with personnel in Cyprus. We also have offices in Montreal where sales, development, and support take place.

3. We have proprietary systems being used to mitigate abuse, but don’t enforce limitations on concurrent connections at the current time.

4. We use Google’s Firebase and Analytics for basic statistical reporting, however, those services do not have access to data transferred by our users. ZenDesk is currently employed to provide support, however, we plan on migrating everything in-house in the near future.

5. Since we keep no logs, there is virtually nothing we can do to respond to DMCA or equivalent inquiries.

6. Since we do not log activity, we have no way of identifying users. In the event that we are somehow forced to log activity for a user going forward, it would be reflected in the Warrant Canary within our Privacy Policy.

7. We do not restrict torrents, file-sharing or P2P.

8. We use ProBiller as a payment provider on our web site, as well as Apple and Google within our iOS and Android apps respectively. Since we have no logs, there is never anything that can be linked to usage of our service nor IP assignment.

9. It depends on the platform. OpenVPN and IKEv2 are both considered to be among the best in the industry.

10. We have a kill-switch feature within our desktop apps, as well as our Android app. For iOS, incorporating a kill-switch is not possible due to operating system restrictions, but we do have an Auto-Reconnect upon Disconnect feature there.

11. We’ve contracted StackPath for the purpose of network infrastructure. Our agreement forbids the snooping of any traffic, and we use DNS servers they host.

12. Here’s a full list.

VPNhub website

CyberGhost

1. We have a strict No-Logs policy, so none of our traffic or DNS servers log or store any user info.

2. We’re part of Kape.

3. Our dedicated team monitors the whole service and infrastructure for any abuse of service. We have several tools in place, from CDN protection to firewalls and our own server monitoring system. Concurrent connections limits are monitored & also enforced via our systems to avoid such types of abuses.

4. We use Google Analytics, Zendesk, and Active Campaign.

5. Back in 2011, we were the first in the VPN industry to publish a Transparency Report, something we still do today with quarterly reports. We receive a lot of DMCA takedown notices, and our reply is always the same: we keep no logs and cannot comply with the request.

6. Since we store no logs, such requests do not affect us. Under Romanian law, data retention is not mandatory. This allows us to give our ‘Ghosties’ complete digital privacy.

7. In some countries, local legislation prevents us from offering adequate service for torrenting. Other locations have performance constraints. We currently do not support port forwarding services. What’s more, specific ports related to email services are also blocked as an anti-spam security measure.

8. We do not store any payment details. These are handled by our payment providers, which are fully Payment Card Industry Data Security Standard (PCI DSS) compliant.

9. We generally favor AES-256 encryption platform- and protocol-wide for its good balance of performance and security.

10. Yes, we have a kill switch in place, but we do not support dual stack.

11. We use disk encryption to make sure no third party can access the contents of our VPN servers. Furthermore, we have additional server authenticity tests in place to eliminate the risk of Man-in-the-middle attacks. We use self-managed DNS servers to ensure the E2E protection of online activity.

12. We have over 6,500 VPN servers in 90 countries. Most of them are physically located within the borders of the specified country. All details are available here.

CyberGhost website

OVPN

1. Our entire infrastructure and VPN service is built to ensure that no logs can be stored – anywhere. Our servers are locked in cabinets and operate without any hard drives. We use a tailored version of Alpine, which doesn’t support SATA controllers, USB ports etc.

2. OVPN Integritet AB (Org no. 556999-4469). We operate under Swedish jurisdiction.

3. We don’t monitor abuse. In order to limit concurrent connections, our VPN servers validate account credentials by making a request to our website. Our web server keeps track of the number of connected devices. This is stored as a value of 0-4, where it is increased by one when a user connects and decreased by one when a user disconnects.

4. For website insights, we use Matomo/Piwik, an Open Source solution that we host ourselves. The last two bytes of visitors’ IP addresses are anonymized; hence no individual users can be identified. Automatic emails from the website are sent using Postmark. Intercom is used for support.

5. Since we don’t store any information, such requests aren’t applicable to us.

6. We can't provide any information to the court. A court wouldn't be able to require logging in our jurisdiction – but in case that did happen, we would move the company abroad. OVPN has insurance that covers legal fees as an additional layer of safety, which gives us the financial muscle to refute any requests for information.

7. We don’t do any traffic discrimination. As such, BitTorrent and other file-sharing traffic are allowed on all servers. We do provide port forwarding services as incoming ports are blocked by default. The allowed port range is 49152 to 65535. For other ports, we recommend users to purchase our Public IPv4 add-on.

8. PayPal, credit cards (via Braintree), Bitcoin (via Bitpay), Bitcoin Cash (via Bitpay), cash in envelopes as well as a Swedish payment system called Swish. We never log IP addresses of users, so we can’t correlate an IP address to a payment.

9. OVPN's default settings, which use AES-256-GCM for OpenVPN. In terms of connection, we recommend using our Multihop add-on.

10. Our desktop client provides a kill switch as well as DNS leak protection. All our servers support dual-stack IPv4 & IPv6. Our browser extension blocks WebRTC leaks.

11. We own all the servers used to operate our service. All VPN servers run without any hard drives – instead we use tmpfs storage in RAM. Writing permissions for the OpenVPN processes have been removed, as well as syslogs. Our VPN servers do not support physical console access, keyboard access nor USB access. The servers are colocated in various data centers that meet our requirements. OVPN does not rent any physical or virtual servers. We operate our own DNS servers.

12. We do not offer any virtual locations. All our regions are listed here. We have photos of our servers at all locations, which are viewable by clicking on the region names.

OVPN website

Surfshark

1. We do not keep any logs, data, timestamps or any other kind of information that would enable anyone to identify current or former users of our service.

2. Surfshark is a registered trademark of Surfshark Ltd., a company registered in the British Virgin Islands (BVI). Surfshark Ltd. is not a subsidiary of any other company.

3. We do not limit the number of simultaneous connections. We do have safeguards against abuse of our service: our Terms of Service include a Fair Usage Policy clause. If this policy is intentionally violated, an automated network maintenance system flags abnormalities in server load and can limit an immoderate number of devices simultaneously connected to one session, so that none of our customers are affected by potentially degraded quality of service.

4. We do not use any Alphabet Inc. products except for Google Analytics, which is used to improve our website performance for potential customers. For live 24/7 customer support and ticketing, we use the industry-standard Zendesk. For our own communication, we use the secure email system Hushmail, and for transactional and user communication we use SendGrid and Iterable.

These third-party services have no access to any other kind of user information outside the scope of the one specified in our Privacy Policy. Also, we have legally binding agreements with all third-party service providers to not disclose any of the information they have to anyone outside the scope of the services they provide to us.

5. DMCA takedown notices do not apply to our service as we operate outside the jurisdiction of the United States. In case we received a non-US equivalent, we would not be able to provide any information because we have none (strict no logs policy).

6. We have never received a court order from the British Virgin Islands (BVI) authorities. If we ever received one, we would truthfully respond that we are unable to identify any user as we keep no logs whatsoever. If data retention laws were enacted in the BVI, we would look for another country in which to register our business. For any information regarding received legal inquiries and orders, we have a live warrant canary.

7. Surfshark is a torrent-friendly service. We allow all file-sharing activities and P2P traffic, including BitTorrent. For that, we have hundreds of specialized servers in various countries, and the user will always be connected to the fastest specialized server in case of P2P activities. We do not provide port forwarding services, and we block port 25.

8. Surfshark subscriptions can be purchased using various payment methods, including cryptocurrency, PayPal, Alipay, major credit cards, and many country-specific options. None of these payments can be linked to a specific user as we do not collect any timestamps, IP addresses, session information, or other data.

9. We recommend using the advanced IKEv2/IPsec and OpenVPN (UDP and TCP) security protocols with strong and fast AES-256-GCM encryption and SHA512 signatures. Also, on our Windows and Android apps we support the Shadowsocks protocol as an option. AES-256-GCM differs from AES-256-CBC in that it has built-in authentication, which makes the encryption process faster.
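If you want a rough, local feel for the GCM-versus-CBC point, OpenSSL's built-in benchmark can be run for each mode separately. Keep in mind this only measures raw cipher throughput on your own CPU; GCM's advantage also includes built-in authentication, whereas CBC needs a separate HMAC pass in a real VPN:

openssl speed -evp aes-256-gcm   # authenticated encryption in one pass
openssl speed -evp aes-256-cbc   # encryption only; a VPN adds an HMAC on top of this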

10. We provide 'kill switches' in all our apps and have built-in DNS leak protection. Also, Surfshark provides IP masking, IPv6 leak protection, WebRTC protection, ad, malware and tracker blocking on the DNS level, MultiHop (double VPN), Whitelister (which works as both direct and reverse split tunneling), etc. Currently, we do not support Dual Stack IPv4/IPv6 functionality.

11. We use our own DNS servers which do not keep any logs as per our Privacy Policy. All our servers are physically located in trusted third-party data centers. 80% of our servers are already RAM-only, and we’ll have a 100% RAM-only server network by the end of June 2020.

Before choosing a third-party service provider, we have a strict due diligence process to make sure they meet our security and trust requirements. To prevent unauthorized snooping, we use the 2FA method to reach our servers and have developed a special authorization procedure so that only authorized system administrators can access them for configurations.

12. As of May 2020, we have over 1700 servers physically located in 109 locations, in 64 countries. As per user requests, we have only a few virtual locations that are clearly indicated within our apps’ user interfaces.

Surfshark website

VPN.ac

1. We keep minimal connection session logs to help us troubleshoot customers' connection problems and to identify attacks.

This information contains IP address, connection start and end time, protocol used (including port) and amount of data transferred for OpenVPN connections. This info isn’t stored on any server disk and is wiped out on session-end time or daily. For WireGuard connections, the endpoint IP (public user’s IP) is erased within a few minutes after closing the connection (no handshakes within a specific time).

2. Cryptolayer SRL, registered in Romania.

3. There are automated firewall rules that can kick-in in the event of some specific abusive activities. Manual intervention can take place when absolutely necessary, in order to maintain the infrastructure stable and reliable for everyone. Concurrent connections are limited by the authentication back-ends.

4. No, we don’t.

5. We are handling DMCA complaints internally without involving the users (i.e. we are not forwarding anything). We use shared IP addresses so it’s not possible to identify the users.

6. This has never happened. In such an event, we would rely on legal advice. It’s worth noting that we use shared public IPs on all servers so it’s not possible to identify a user based on past activity using a specific VPN gateway IP.

7. It is allowed on all servers. Port forwarding is not supported due to the security and privacy weaknesses that come with it; ports aren't blocked except for SMTP/25.

8. All popular cryptocurrencies, PayPal, credit cards, several country-specific payment methods, some gift cards. Crypto payments can be anonymous.

9. OpenVPN using Elliptic Curve Cryptography for key exchange (ECDHE, curve secp256k1) is used by default in most cases. We also support RSA-4096, and SHA256 and SHA512 for digest/HMAC. For data encryption we use AES-256-GCM and AES-128-GCM. We also support the WireGuard VPN protocol with its parameters (Curve25519, Blake2s, ChaCha20, Poly1305).
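For readers unfamiliar with the curve named here, a quick way to poke at secp256k1 locally is with OpenSSL. This simply generates and inspects a throwaway key on that curve and is purely illustrative, not part of VPN.ac's setup:

openssl ecparam -name secp256k1 -genkey -noout -out throwaway.key   # generate an EC key on secp256k1
openssl ec -in throwaway.key -text -noout | head                    # inspect the curve name and key size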

10. Yes, these features are embedded in our client software. We also provide guides and support on how to set effective “kill switches” for specific applications like torrent clients.

11. We have physical control over our servers in Romania. In other countries, we rent or collocate our hardware. We use our own DNS resolvers and all DNS traffic between VPN gateways and DNS resolvers is encrypted, not logged.

12. We don’t use “virtual locations”. All servers are physically located in several countries, a full list is available here.

VPN.ac website

—–

*Note: Private Internet Access, ExpressVPN and NordVPN are TorrentFreak sponsors. We reserve the first three spots for them as a courtesy. This article also includes a few affiliate links which help us pay the bills. We never sell positions in our review article or charge providers for a listing.

All VPNs

Private Internet Access
ExpressVPN
NordVPN
HideIPVPN
IVPN
AzireVPN
Windscribe
VPNArea
Surfshark
AirVPN
CactusVPN
Trust.Zone
SwitchVPN
PrivateVPN
WhatTheServer
ibVPN
Mullvad
TorGuard
Perfect Privacy
SlickVPN
HeadVPN
VPNhub
CyberGhost
OVPN
VPN.ac

From: TF, for the latest news on copyright battles, piracy and more.

Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-wipe-a-mac-hard-drive/

erasing a hard drive and a solid state drive

What do I do with a Mac that still has personal data on it? Do I take out the disk drive and smash it? Do I sweep it with a really strong magnet? Is there a difference in how I handle a hard drive (HDD) versus a solid-state drive (SSD)? Well, taking a sledgehammer or projectile weapon to your old machine is certainly one way to make the data irretrievable, and it can be enormously cathartic as long as you follow appropriate safety and disposal protocols. But there are far less destructive ways to make sure your data is gone for good. Let me introduce you to secure erasing.

Which Type of Drive Do You Have?

Before we start, you need to know whether you have an HDD or an SSD. To find out, or at least to make sure, click on the Apple menu and select "About This Mac." Once there, select the "Storage" tab to see which type of drive is in your system.

The first example, below, shows a SATA Disk (HDD) in the system.

SATA HDD

In the next case, we see we have a Solid State SATA Drive (SSD), plus a Mac SuperDrive.

Mac storage dialog showing SSD

The third screen shot shows an SSD, as well. In this case it’s called “Flash Storage.”

Flash Storage
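If you'd rather check from Terminal than from About This Mac, diskutil reports the same information. disk0 is the usual internal drive, but substitute the identifier shown by diskutil list if yours differs:

diskutil list                              # find your internal disk identifier (usually disk0)
diskutil info disk0 | grep "Solid State"   # "Yes" means SSD; "No" means a spinning HDD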

Make Sure You Have a Backup

Before you get started, you’ll want to make sure that any important data on your hard drive has moved somewhere else. OS X’s built-in Time Machine backup software is a good start, especially when paired with Backblaze. You can learn more about using Time Machine in our Mac Backup Guide.

With a local backup copy in hand and secure cloud storage, you know your data is always safe no matter what happens.

Once you’ve verified your data is backed up, roll up your sleeves and get to work. The key is OS X Recovery — a special part of the Mac operating system since OS X 10.7 “Lion.”

How to Wipe a Mac Hard Disk Drive (HDD)

NOTE: If you’re interested in wiping an SSD, see below.

    1. Make sure your Mac is turned off.
    2. Press the power button.
    3. Immediately hold down the command and R keys.
    4. Wait until the Apple logo appears.
    5. Select “Disk Utility” from the OS X Utilities list. Click Continue.
    6. Select the disk you’d like to erase by clicking on it in the sidebar.
    7. Click the Erase button.
    8. Click the Security Options button.
    9. The Security Options window includes a slider that enables you to determine how thoroughly you want to erase your hard drive.

There are four notches to that Security Options slider. “Fastest” is quick but insecure — data could potentially be rebuilt using a file recovery app. Moving that slider to the right introduces progressively more secure erasing. Disk Utility’s most secure level erases the information used to access the files on your disk, then writes zeroes across the disk surface seven times to help remove any trace of what was there. This setting conforms to the DoD 5220.22-M specification.

  10. Once you’ve selected the level of secure erasing you’re comfortable with, click the OK button.
  11. Click the Erase button to begin. Bear in mind that the more secure method you select, the longer it will take. The most secure methods can add hours to the process.

Once it’s done, the Mac’s hard drive will be clean as a whistle and ready for its next adventure: a fresh installation of OS X, being donated to a relative or a local charity, or just sent to an e-waste facility. Of course you can still drill a hole in your disk or smash it with a sledgehammer if it makes you happy, but now you know how to wipe the data from your old computer with much less ruckus.
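For what it's worth, the same secure-erase levels are exposed on the command line via diskutil, which can be handy when working from the Recovery Terminal. The disk identifier below is a placeholder, and this wipes the entire drive, so triple-check the target and never point it at the disk you booted from:

diskutil list                      # confirm which identifier is the drive you intend to wipe
diskutil secureErase 2 /dev/disk2  # level 2 is the 7-pass DoD-style erase; disk2 is a placeholder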

The above instructions apply to older Macintoshes with HDDs. What do you do if you have an SSD?

Securely Erasing SSDs, and Why Not To

Most new Macs ship with solid state drives (SSDs). Only the iMac and Mac mini ship with regular hard drives anymore, and even those are available in pure SSD variants if you want.

If your Mac comes equipped with an SSD, Apple’s Disk Utility software won’t actually let you zero the hard drive.

Wait, what?

In a tech note posted to Apple’s own online knowledgebase, Apple explains that you don’t need to securely erase your Mac’s SSD:

With an SSD drive, Secure Erase and Erasing Free Space are not available in Disk Utility. These options are not needed for an SSD drive because a standard erase makes it difficult to recover data from an SSD.

In fact, some folks will tell you not to zero out the data on an SSD, since it can cause wear and tear on the memory cells that, over time, can affect its reliability. I don’t think that’s nearly as big an issue as it used to be — SSD reliability and longevity has improved.

If “Standard Erase” doesn’t quite make you feel comfortable that your data can’t be recovered, there are a couple of options.

FileVault Keeps Your Data Safe

One way to make sure that your SSD’s data remains secure is to use FileVault. FileVault is whole-disk encryption for the Mac. With FileVault engaged, you need a password to access the information on your hard drive. Without the password, that data stays encrypted and unreadable.

There’s one potential downside of FileVault — if you lose your password or the encryption key, you’re screwed: You’re not getting your data back any time soon. Based on my experience working at a Mac repair shop, losing a FileVault key happens more frequently than it should.

When you first set up a new Mac, you’re given the option of turning FileVault on. If you don’t do it then, you can turn on FileVault at any time by clicking on your Mac’s System Preferences, clicking on Security & Privacy, and clicking on the FileVault tab. Be warned, however, that the initial encryption process can take hours, as will decryption if you ever need to turn FileVault off.
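FileVault can also be checked and enabled from Terminal with Apple's fdesetup tool, which is useful if you're verifying a machine before handing it off. Enabling it will prompt for an administrator and print a recovery key you must keep safe:

fdesetup status          # reports whether FileVault is On or Off
sudo fdesetup enable     # starts encryption and prints the recovery key; store it somewhere safe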

With FileVault turned on, you can restart your Mac into its Recovery System (by restarting the Mac while holding down the command and R keys) and erase the hard drive using Disk Utility, once you’ve unlocked it (by selecting the disk, clicking the File menu, and clicking Unlock). That deletes the FileVault key, which means any data on the drive is useless.

FileVault doesn’t impact the performance of most modern Macs, though I’d suggest only using it if your Mac has an SSD, not a conventional hard disk drive.

Securely Erasing Free Space on Your SSD

If you don’t want to take Apple’s word for it, if you’re not using FileVault, or if you just want to, there is a way to securely erase free space on your SSD. It’s a little more involved but it works.

Before we get into the nitty-gritty, let me state for the record that this really isn’t necessary to do, which is why Apple’s made it so hard to do. But if you’re set on it, you’ll need to use Apple’s Terminal app. Terminal provides you with command line interface access to the OS X operating system. Terminal lives in the Utilities folder, but you can access Terminal from the Mac’s Recovery System, as well. Once your Mac has booted into the Recovery partition, click the Utilities menu and select Terminal to launch it.

From a Terminal command line, type:

diskutil secureErase freespace VALUE /Volumes/DRIVE

That tells your Mac to securely erase the free space on your SSD. You’ll need to change VALUE to a number between 0 and 4. 0 is a single-pass run of zeroes; 1 is a single-pass run of random numbers; 2 is a 7-pass erase; 3 is a 35-pass erase; and 4 is a 3-pass erase. DRIVE should be changed to the name of your hard drive. To run a 7-pass erase of your SSD drive in “JohnB-Macbook”, you would enter the following:

diskutil secureErase freespace 2 /Volumes/JohnB-Macbook

And remember, if you used a space in the name of your Mac’s hard drive, you need to escape the space with a backslash. For example, to run a 35-pass erase on a hard drive called “Macintosh HD” you enter the following:

diskutil secureErase freespace 3 /Volumes/Macintosh\ HD

Something to remember is that the more extensive the erase procedure, the longer it will take.

When Erasing is Not Enough — How to Destroy a Drive

If you absolutely, positively need to be sure that all the data on a drive is irretrievable, see this Scientific American article (with contributions by Gleb Budman, Backblaze CEO), How to Destroy a Hard Drive — Permanently.

The post Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay as you go pricing. This enables developers to provision certificates in just a few simple API calls while administrators have a central CA management console and fine grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA so we’ll select that and use my super secure desktop as the root CA and click Next. This isn’t what I would do in a production setting but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of routing my S3 bucket to a custom domain. In this case I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.
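The console wizard above maps roughly onto a single ACM PCA API call. As a hedged sketch (the JSON file contents and idempotency token are placeholders, not values from this walkthrough), the CLI equivalent looks like this:

# ca_config.json holds KeyAlgorithm, SigningAlgorithm, and the Subject (e.g. CommonName secure.internal)
# revocation_config.json points the CRL at the S3 bucket created above
aws acm-pca create-certificate-authority \
    --certificate-authority-configuration file://ca_config.json \
    --revocation-configuration file://revocation_config.json \
    --certificate-authority-type SUBORDINATE \
    --idempotency-token example-token-1234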

A few seconds later and I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).
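If you'd rather pull the CSR with the CLI than copy it from the console, something along these lines should work (the ARN is a placeholder for the CA created above):

aws acm-pca get-certificate-authority-csr \
    --certificate-authority-arn arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE-ID \
    --output text > csr/CSR.pem   # saves the PEM-encoded CSR for signing by the root CA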

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.
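The console import step also has a CLI counterpart. As a sketch with placeholder paths and ARN, importing the signed certificate and its chain looks roughly like this:

aws acm-pca import-certificate-authority-certificate \
    --certificate-authority-arn arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE-ID \
    --certificate fileb://certs/subordinate_cert.pem \
    --certificate-chain fileb://certs/ca_chain.pem   # the chain up to the root that signed it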

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, I can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking Create a new certificate, I’ll select the Request a private certificate radio button, then click Request a certificate.

From there it’s just similar to provisioning a normal certificate in ACM.
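For completeness, requesting a certificate from the new private CA can also be done with a single ACM call; the domain name and ARN below are made-up examples in keeping with the secure.internal CA above:

aws acm request-certificate \
    --domain-name tarawebproject.secure.internal \
    --certificate-authority-arn arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE-ID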

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt) and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

Stretch for PCs and Macs, and a Raspbian update

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/stretch-pcs-macs-raspbian-update/

Today, we are launching the first Debian Stretch release of the Raspberry Pi Desktop for PCs and Macs, and we’re also releasing the latest version of Raspbian Stretch for your Pi.

Raspberry Pi Desktop Stretch splash screen

For PCs and Macs

When we released our custom desktop environment on Debian for PCs and Macs last year, we were slightly taken aback by how popular it turned out to be. We really only created it as a result of one of those “Wouldn’t it be cool if…” conversations we sometimes have in the office, so we were delighted by the Pi community’s reaction.

Seeing how keen people were on the x86 version, we decided that we were going to try to keep releasing it alongside Raspbian, with the ultimate aim being to make simultaneous releases of both. This proved to be tricky, particularly with the move from the Jessie version of Debian to the Stretch version this year. However, we have now finished the job of porting all the custom code in Raspbian Stretch to Debian, and so the first Debian Stretch release of the Raspberry Pi Desktop for your PC or Mac is available from today.

The new Stretch releases

As with the Jessie release, you can either run this as a live image from a DVD, USB stick, or SD card or install it as the native operating system on the hard drive of an old laptop or desktop computer. Please note that installing this software will erase anything else on the hard drive — do not install this over a machine running Windows or macOS that you still need to use for its original purpose! It is, however, safe to boot a live image on such a machine, since your hard drive will not be touched by this.

We’re also pleased to announce that we are releasing the latest version of Raspbian Stretch for your Pi today. The Pi and PC versions are largely identical: as before, there are a few applications (such as Mathematica) which are exclusive to the Pi, but the user interface, desktop, and most applications will be exactly the same.

For Raspbian, this new release is mostly bug fixes and tweaks over the previous Stretch release, but there are one or two changes you might notice.

File manager

The file manager included as part of the LXDE desktop (on which our desktop is based) is a program called PCManFM, and it’s very feature-rich; there’s not much you can’t do in it. However, having used it for a few years, we felt that it was perhaps more complex than it needed to be — the sheer number of menu options and choices made some common operations more awkward than they needed to be. So to try to make file management easier, we have implemented a cut-down mode for the file manager.

Raspberry Pi Desktop Stretch - file manager

Most of the changes are to do with the menus. We’ve removed a lot of options that most people are unlikely to change, and moved some other options into the Preferences screen rather than the menus. The two most common settings people tend to change — how icons are displayed and sorted — are now options on the toolbar and in a top-level menu rather than hidden away in submenus.

The sidebar now only shows a single hierarchical view of the file system, and we’ve tidied the toolbar and updated the icons to make them match our house style. We’ve removed the option for a tabbed interface, and we’ve stomped a few bugs as well.

One final change was to make it possible to rename a file just by clicking on its icon to highlight it, and then clicking on its name. This is the way renaming works on both Windows and macOS, and it’s always seemed slightly awkward that Unix desktop environments tend not to support it.

As with most of the other changes we’ve made to the desktop over the last few years, the intention is to make it simpler to use, and to ease the transition from non-Unix environments. But if you really don’t like what we’ve done and long for the old file manager, just untick the box for Display simplified user interface and menus in the Layout page of Preferences, and everything will be back the way it was!

Raspberry Pi Desktop Stretch - preferences GUI

Battery indicator for laptops

One important feature missing from the previous release was an indication of the amount of battery life. Eben runs our desktop on his Mac, and he was becoming slightly irritated by having to keep rebooting into macOS just to check whether his battery was about to die — so fixing this was a priority!

We’ve added a battery status icon to the taskbar; this shows current percentage charge, along with whether the battery is charging, discharging, or connected to the mains. When you hover over the icon with the mouse pointer, a tooltip with more details appears, including the time remaining if the battery can provide this information.

Raspberry Pi Desktop Stretch - battery indicator

While this battery monitor is mainly intended for the PC version, it also supports the first-generation pi-top — to see it, you’ll only need to make sure that I2C is enabled in Configuration. A future release will support the new second-generation pi-top.

New PC applications

We have included a couple of new applications in the PC version. One is called PiServer — this allows you to set up an operating system, such as Raspbian, on the PC which can then be shared by a number of Pi clients networked to it. It is intended to make it easy for classrooms to have multiple Pis all running exactly the same software, and for the teacher to have control over how the software is installed and used. PiServer is quite a clever piece of software, and it’ll be covered in more detail in another blog post in December.

We’ve also added an application which allows you to easily use the GPIO pins of a Pi Zero connected via USB to a PC in applications using Scratch or Python. This makes it possible to run the same physical computing projects on the PC as you do on a Pi! Again, we’ll tell you more in a separate blog post this month.

Both of these applications are included as standard on the PC image, but not on the Raspbian image. You can run them on a Pi if you want — both can be installed from apt.
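On a Pi, the same package names used in the upgrade commands later in this post will pull them in from the standard repositories:

sudo apt-get install piserver usbbootgui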

How to get the new versions

New images for both Raspbian and Debian versions are available from the Downloads page.

It is possible to update existing installations of both Raspbian and Debian versions. For Raspbian, this is easy: just open a terminal window and enter

sudo apt-get update
sudo apt-get dist-upgrade

Video: Updating Raspbian on your Raspberry Pi. How to update to the latest version of Raspbian on your Raspberry Pi.

It is slightly more complex for the PC version, as the previous release was based around Debian Jessie. You will need to edit the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list, using sudo to do so. In both files, change every occurrence of the word “jessie” to “stretch”.
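If you’d rather not edit the files by hand, a single sed invocation should make the same substitution, assuming the stock file locations:

sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list

When that’s done, do the following: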

sudo apt-get update 
sudo dpkg --force-depends -r libwebkitgtk-3.0-common
sudo apt-get -f install
sudo apt-get dist-upgrade
sudo apt-get install python3-thonny
sudo apt-get install sonic-pi=2.10.0~repack-rpt1+2
sudo apt-get install piserver
sudo apt-get install usbbootgui

At several points during the upgrade process, you will be asked if you want to keep the current version of a configuration file or to install the package maintainer’s version. In every case, keep the existing version, which is the default option. The update may take an hour or so, depending on your network connection.

As with all software updates, there is the possibility that something may go wrong during the process, which could lead to your operating system becoming corrupted. Therefore, we always recommend making a backup first.

Enjoy the new versions, and do let us know any feedback you have in the comments or on the forums!

The post Stretch for PCs and Macs, and a Raspbian update appeared first on Raspberry Pi.

New – VPC Endpoints for DynamoDB

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-vpc-endpoints-for-dynamodb/

Starting today Amazon Virtual Private Cloud (VPC) Endpoints for Amazon DynamoDB are available in all public AWS regions. You can provision an endpoint right away using the AWS Management Console or the AWS Command Line Interface (CLI). There are no additional costs for a VPC Endpoint for DynamoDB.

Many AWS customers run their applications within an Amazon Virtual Private Cloud (VPC) for security or isolation reasons. Previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options. You could use an Internet Gateway (with a NAT Gateway or by assigning your instances public IPs), or you could route all of your traffic to your local infrastructure via VPN or AWS Direct Connect and then back to DynamoDB. Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB. Here is a picture of the old infrastructure.

Creating an Endpoint

Let’s create a VPC Endpoint for DynamoDB. We can make sure our region supports the endpoint with the DescribeVpcEndpointServices API call.


aws ec2 describe-vpc-endpoint-services --region us-east-1
{
    "ServiceNames": [
        "com.amazonaws.us-east-1.dynamodb",
        "com.amazonaws.us-east-1.s3"
    ]
}

Great, so I know my region supports these endpoints and I know what my regional endpoint is. I can grab one of my VPCs and provision an endpoint with a quick call to the CLI or through the console. Let me show you how to use the console.

First I’ll navigate to the VPC console and select “Endpoints” in the sidebar. From there I’ll click “Create Endpoint” which brings me to this handy console.

You’ll notice the AWS Identity and Access Management (IAM) policy section for the endpoint. This supports all of the fine grained access control that DynamoDB supports in regular IAM policies and you can restrict access based on IAM policy conditions.

For now I’ll give full access to my instances within this VPC and click “Next Step”.

This brings me to a list of route tables in my VPC and asks me which of these route tables I want to assign my endpoint to. I’ll select one of them and click “Create Endpoint”.
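The same thing can be done in one CLI call; the VPC and route table IDs below are placeholders, and the optional --policy-document flag takes the same IAM policy JSON shown in the console step:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-1a2b3c4d \
    --service-name com.amazonaws.us-east-1.dynamodb \
    --route-table-ids rtb-11aa22bb
    # add --policy-document file://dynamodb-endpoint-policy.json to restrict access further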

Keep in mind the note of warning in the console: if you have source restrictions to DynamoDB based on public IP addresses the source IP of your instances accessing DynamoDB will now be their private IP addresses.

After adding the VPC Endpoint for DynamoDB to our VPC our infrastructure looks like this.

That’s it folks! It’s that easy. It’s provided at no cost. Go ahead and start using it today. If you need more details you can read the docs here.

Weekly roundup: Potluck jam

Post Syndicated from Eevee original https://eev.ee/dev/2017/05/09/weekly-roundup-potluck-jam/

Oh hey I did a bunch of stuff.

  • patreon: I wrote the usual monthly update and posted my progress so far on my book for $4 patrons. Also I meticulously tagged all my old posts for some reason, so I guess now you can find photos of Pearl if you want to?

  • blog: I finally fixed the archives page to not be completely flooded by roundups, and added a link to it, which has been missing since the last theme refresh ages ago. Also updated the sidebar to match the front page.

  • flora: I hastily cobbled together a species sheet VN, the first new one in a year or so.

  • music: I made a song. It’s okay. It feels incomplete, probably because I got tired of working on it partway through and gave it a hasty ending. Maybe I’ll iterate on it sometime. I also started mucking around with an attempt at contributing to the Flora glitch album, but I have absolutely no idea what I’m doing.

  • potluck: I came up with a weird idea: ask the general public for sprites, then make a game out of whatever I get. I made a tiny Flask thing for submitting tiles and the results so far are… weird. Nine days to go until I close tile submissions, if you want to give this a shot.

  • dev: I fixed some mGBA bugs with multiple gamepads, woohoo.

  • art: I did some doodling that wasn’t too bad. Mostly I started trying to flesh out the fox flux tileset, which is proving more difficult than expected.

I have some Patreon obligations I oughta get to sooner rather than later; I should probably get a skeleton engine ready for the potluck game, whatever it ends up being; and I’d like to get some more book work done. But at the moment I’m mostly practicing art and trying to make tiles, since the art is going to be the major blocker for expanding fox flux into a somewhat bigger game.

New- Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/new-aws-codestar/

It wasn’t too long ago that I was on a development team working toward completing a software project by a release deadline and facing the challenges most software teams face today in developing applications. Challenges such as new project environment setup, team member collaboration, and the day-to-day task of keeping track of the moving pieces of code, configuration, and libraries for each development build. Today, with companies’ need to innovate and get to market faster, it has become essential to make it easier and more efficient for development teams to create, build, and deploy software.

Unfortunately, many organizations face some key challenges in their quest for a more agile, dynamic software development process. The first challenge most new software projects face is the lengthy setup process that developers have to complete before they can start coding. This process may include setting up of IDEs, getting access to the appropriate code repositories, and/or identifying infrastructure needed for builds, tests, and production.

Collaboration is another challenge that most development teams may face. In order to provide a secure environment for all members of the project, teams have to frequently set up separate projects and tools for various team roles and needs. In addition, providing information to all stakeholders about updates on assignments, the progression of development, and reporting software issues can be time-consuming.

Finally, most companies desire to increase the speed of their software development and reduce the time to market by adopting best practices around continuous integration and continuous delivery. Implementing these agile development strategies may require companies to spend time in educating teams on methodologies and setting up resources for these new processes.

Now Presenting: AWS CodeStar

To help development teams ease the challenges of building software while helping to increase the pace of releasing applications and solutions, I am excited to introduce AWS CodeStar.

AWS CodeStar is a cloud service designed to make it easier to develop, build, and deploy applications on AWS by simplifying the setup of your entire development project. AWS CodeStar includes project templates for common development platforms to enable provisioning of projects and resources for coding, building, testing, deploying, and running your software project.

The key benefits of the AWS CodeStar service are:

  • Easily create new projects using templates for Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda in five different programming languages: JavaScript, Java, Python, Ruby, and PHP. When you select a template, the service provisions the underlying AWS services needed for your project and application.
  • Unified experience for access and security policies management for your entire software team. Projects are automatically configured with appropriate IAM access policies to ensure a secure application environment.
  • Pre-configured project management dashboard for tracking various activities, such as code commits, build results, deployment activity and more.
  • Working sample code to help you get up and running quickly, usable from your favorite IDEs, like Visual Studio, Eclipse, or any code editor that supports Git.
  • Automated configuration of a continuous delivery pipeline for each project using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
  • Integration with Atlassian JIRA Software for issue management and tracking directly from the AWS CodeStar console

With AWS CodeStar, development teams can build an agile software development workflow that not only increases the speed with which teams develop and deploy software and bug fixes, but also enables developers to build software that is more in line with customers’ requests and needs.

An example of a responsive development workflow using AWS CodeStar is shown below:

Journey Into AWS CodeStar

Now that you know a little more about the AWS CodeStar service, let’s jump into using the service to set up a web application project. First, I’ll go into the AWS CodeStar console and click the Start a project button.

If you have not set up the appropriate IAM permissions, AWS CodeStar will show a dialog box requesting permission to administer AWS resources on your behalf. I will click the Yes, grant permissions button to grant AWS CodeStar the appropriate permissions to other AWS resources.

However, I received a warning that I do not have administrative permissions to AWS CodeStar as I have not applied the correct policies to my IAM user. If you want to create projects in AWS CodeStar, you must apply the AWSCodeStarFullAccess managed policy to your IAM user or have an IAM administrative user with full permissions for all AWS services.
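
If you prefer the command line, attaching that managed policy is a one-liner with the AWS CLI. This is just a sketch; the user name below is a placeholder for your own IAM user, while the AWSCodeStarFullAccess policy name comes straight from the warning above.

aws iam attach-user-policy \
    --user-name my-iam-user \
    --policy-arn arn:aws:iam::aws:policy/AWSCodeStarFullAccess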

Now that I have added the aforementioned permissions in IAM, I can use the service to create a project. To start, I simply click on the Create a new project button and I am taken to the hub of the AWS CodeStar service.

At this point, I am presented with over twenty different AWS CodeStar project templates to choose from in order to provision various environments for my software development needs. Each project template specifies the AWS Service used to deploy the project, the supported programming language, and a description of the type of development solution implemented. AWS CodeStar currently supports the following AWS Services: Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. Using preconfigured AWS CloudFormation templates, these project templates can create software development projects like microservices, Alexa skills, web applications, and more with a simple click of a button.
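
The console is the easiest way to browse these templates, but projects are also visible from the AWS CLI once they exist. As a rough sketch (the project ID below is hypothetical):

aws codestar list-projects
aws codestar describe-project --id tarawebproject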

For my first AWS CodeStar project, I am going to build a serverless web application using Node.js and AWS Lambda using the Node.js/AWS Lambda project template.

You will notice that for this template AWS CodeStar sets up all of the tools and services you need for a development project, including an AWS CodePipeline connected with the following services: AWS CodeBuild, AWS CloudFormation, and Amazon CloudWatch. I’ll name my new AWS CodeStar project, TaraWebProject, and click Create Project.

Since this is my first time creating an AWS CodeStar project, I will see a dialog that asks about the setup of my AWS CodeStar user settings. I’ll type Tara in the textbox for the Display Name and add my email address in the Email textbox. This information is how I’ll appear to others in the project.

The next step is to select how I want to edit my project code. I have decided to edit my TaraWebProject project code using the Visual Studio IDE. To do so, I’ll need to configure Visual Studio with the AWS Toolkit for Visual Studio 2015 so that I can access AWS resources while editing my project code. On this screen, I am also presented with the link to the AWS CodeCommit Git repository that AWS CodeStar configured for my project.

The provisioning and tool setup for my software development project is now complete. I’m presented with the AWS CodeStar dashboard for my software project, TaraWebProject, which allows me to manage the resources for the project, such as code commits, team membership and wiki, the continuous delivery pipeline, Jira issue tracking, project status, and other applicable project resources.

What is really cool about AWS CodeStar for me is that it provides a working sample project from which I can start the development of my serverless web application. To view the sample of my new web application, I will go to the Application endpoints section of the dashboard and click the link provided.

A new browser window will open and will display the sample web application AWS CodeStar generated to help jumpstart my development. A cool feature of the sample application is that the background of the sample app changes colors based on the time of day.

Let’s now take a look at the code used to build the sample website. In order to view the code, I will go back to my TaraWebProject dashboard in the AWS CodeStar console and select the Code option from the sidebar menu.

This takes me to the tarawebproject Git repository in the AWS CodeCommit console. From here, I can manually view the code for my web application, see the commits made in the repo, compare commits or branches, and create triggers in response to repo events.

This gives me a great starting point for developing my AWS-hosted web application. Since I opted to integrate AWS CodeStar with Visual Studio, I can update my web application by using the IDE to make code changes, which will be automatically included in the TaraWebProject every time I commit to the provisioned code repository.

You will notice that on the AWS CodeStar TaraWebProject dashboard, there is a message about connecting the tools to my project repository in order to work on the code. Even though I have already selected Visual Studio as my IDE of choice, let’s click on the Connect Tools button to review the steps to connecting to this IDE.

Again, I will see a screen that allows me to choose which tool I wish to use to edit my project code: Visual Studio, Eclipse, or the command line. It is important to note that I have the option to change my IDE choice at any time while working on my development project. Additionally, I can connect to my Git AWS CodeCommit repo via HTTPS or SSH. To retrieve the appropriate repository URL for each protocol, I only need to select the Code repository URL dropdown, select HTTPS or SSH, and copy the resulting URL from the text field.
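
For example, with the HTTPS URL copied, cloning the repository from a terminal would look roughly like this. The region and repository name below are placeholders; use the exact URL shown in your own console.

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/tarawebproject
cd tarawebproject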

After selecting Visual Studio, CodeStar takes me to the steps needed to integrate with Visual Studio. This includes downloading the AWS Toolkit for Visual Studio, connecting the Team Explorer to AWS CodeStar via AWS CodeCommit, as well as how to push changes to the repo.

After successfully connecting Visual Studio to my AWS CodeStar project, I return to the AWS CodeStar TaraWebProject dashboard to start managing the team members working on the web application with me. First, I will select the Setup your team tile so that I can go to the Project Team page.

On my TaraWebProject Project Team page, I’ll add a team member, Jeff, by selecting the Add team member button and clicking on the Select user dropdown. Team members must be IAM users in my account, so I’ll click on the Create new IAM user link to create an IAM account for Jeff.

When the Create IAM user dialog box comes up, I will enter an IAM user name, Display name, and Email Address for the team member, in this case, Jeff Barr. There are three project roles that Jeff can be granted: Owner, Contributor, or Viewer. For the TaraWebProject application, I will grant him the Contributor project role and allow him to have remote access by selecting the Remote access checkbox. Now I will create Jeff’s IAM user account by clicking the Create button.

This brings me to the IAM console to confirm the creation of the new IAM user. After reviewing the IAM user information and the permissions granted, I will click the Create user button to complete the creation of Jeff’s IAM user account for TaraWebProject.

After successfully creating Jeff’s account, it is important that I either send Jeff’s login credentials to him in email or download the credentials .csv file, as I will not be able to retrieve these credentials again. I would need to generate new credentials for Jeff if I leave this page without obtaining his current login credentials. Clicking the Close button returns me to the AWS CodeStar console.

Now I can complete adding Jeff as a team member in the TaraWebProject by selecting the JeffBarr-WebDev IAM role and clicking the Add button.

I’ve successfully added Jeff as a team member to my AWS CodeStar project, TaraWebProject, enabling team collaboration in building the web application.

Another thing that I really enjoy about using the AWS CodeStar service is that I can monitor all of my project activity right from my TaraWebProject dashboard. I can see the application activity, any recent code commits, and track the status of any project actions, such as the results of my build, any code changes, and the deployments, all from one comprehensive dashboard. AWS CodeStar ties the dashboard into Amazon CloudWatch with the Application activity section, provides data about the build and deployment status in the Continuous Deployment section with AWS CodePipeline, and shows the latest Git code commit with AWS CodeCommit in the Commit history section.

Summary

In my journey of the AWS CodeStar service, I created a serverless web application that provisioned my entire development toolchain for coding, building, testing, and deployment for my TaraWebProject software project using AWS services. Amazingly, I have yet to scratch the surface of the benefits of using AWS CodeStar to manage day-to-day software development activities involved in releasing applications.

AWS CodeStar makes it easy for you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. AWS CodeStar allows you to choose from various templates to set up projects using AWS Lambda, Amazon EC2, or AWS Elastic Beanstalk. It comes pre-configured with a project management dashboard, an automated continuous delivery pipeline, and a Git code repository using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, allowing developers to implement modern agile software development best practices. Each AWS CodeStar project gives developers a head start in development by providing working code samples that can be used with popular IDEs that support Git. Additionally, AWS CodeStar provides out-of-the-box integration with Atlassian JIRA Software, providing a project management and issue tracking system for your software team directly from the AWS CodeStar console.

You can get started using the AWS CodeStar service for developing new software projects on AWS today. Learn more by reviewing the AWS CodeStar product page and the AWS CodeStar user guide documentation.

Tara

B2 for Beginners: Inside the B2 Web interface

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/b2-for-beginners-inside-the-b2-web-interface/

B2 Cloud Storage enables you to store data in the cloud at a fraction of what you’ll pay with other services. For instance, we’re one-fourth the price of Amazon’s S3. We’ve made B2 easy to access thanks to a web interface, an API, and a command line interface. Let’s get to know the web interface a bit better, because it’s the easiest way to get around B2 and a good way to get a handle on the fundamentals of B2 use.

Anyone with a Backblaze account can set up B2 access by visiting My Settings. Look for Enabled Products and check B2 Cloud Storage.

B2 is accessed the same way as your Backblaze Computer Backup. The sidebar on the left side of your My Account window shows you all the Backblaze services you use, including B2. Let’s go through the individual links under B2 Cloud Storage to get a sense of what they are and what they do.

Buckets

Data in B2 is stored in buckets. Think of a bucket as a top-level folder or directory. You can create as many buckets as you want. What’s more, you can put in as many files as you want. Buckets can contain files of any type or size.

Third-party applications and services can integrate with B2, and many already do. The Buckets screen is where you can get your Account ID information and create an application key – a unique identifier your apps will use to securely connect to B2. If you’re using a third-party app that needs access to your bucket, such as a NAS backup app or a file sync tool, this is where you’ll find the info you need to connect. (We’ll have more info about how to backup your NAS to B2 very soon!)
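
If the tool you’re connecting asks you to verify those credentials from a terminal first, a quick sketch with the B2 command line tool looks like the following. The exact subcommand spelling varies a little between versions of the tool, so check b2 --help for yours.

b2 authorize_account <your-account-id> <your-application-key>
b2 list_buckets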

The Buckets window lists the buckets you’ve created and provides basic information including creation date, ID, public or private type, lifecycle information, number of files, size and snapshots.

Click the Bucket Settings link to adjust each bucket’s individual settings. You can specify if files in the bucket are public or private. Private files can’t be shared, while public ones can be.

You can also tag your bucket with customized information encoded in JSON format. Custom info can contain letters, numbers, “-” and “_”.

Browse Files

Click the Upload/Download button to see a directory listing for each bucket. Alternatively, click the Browse Files link on the left side of the B2 interface.

You can create a new subdirectory by clicking the New Folder button, or begin to upload files by clicking the Upload button. You can drag and drop the files you’d like to upload and Backblaze will handle the rest. Alternatively, clicking on the dialog box that appears will let you select the files on your computer you’d like to upload.

Info Button

Next to each individual file is an information button. Click it for details about the file, including name, location, kind, size and other details. You’ll also see a “Friendly URL” link. If the bucket is public and you’d like to share this file with others, you may copy that Friendly URL and paste it into an email or message to let people know where to find it.

Download

You can download the contents of your buckets by clicking the checkbox next to the filename and clicking the Download button. You can also delete files and create snapshots. Snapshots are helpful if you want to preserve copies of your files in their present state for some future download or recovery. You can also create a snapshot of the full bucket. If you have a large snapshot, you can order it as a hard drive instead of downloading it. We’ll get more into snapshots in a future blog post.

Lifecycle Settings

We recently introduced Lifecycle Settings to keep your buckets from getting cluttered with too many versions of files. Our web interface lets you manage these settings for each individual bucket.

Lifecycle Rules

By default, the bucket’s lifecycle setting is to keep all versions of files you upload. The web interface lets you adjust that so B2 only keeps the last file version, keeps the last file for a specific number of days, or keeps files based on your own custom rule. You can determine the file path, the number of days until the file is hidden, and the number of days until the file is deleted.
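
If you prefer to script this, the same kind of rule can be set with the B2 command line tool. The sketch below is illustrative only: the flag spelling and argument order can differ between versions of the tool (check b2 --help), and the bucket name and rule values are just examples. This rule keeps files under logs/ for 30 days after they are hidden, then deletes them.

b2 update_bucket --lifecycleRules '[{"fileNamePrefix": "logs/", "daysFromUploadingToHiding": null, "daysFromHidingToDeleting": 30}]' my-bucket allPrivate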

Reports

Backblaze updates your account daily with details on what’s happening with your B2 files. These reports are accessible through the B2 interface under the Reports tab. Clicking on Reports will reveal an easy-to-understand visual chart showing you the average number of GB stored, total GB downloaded, and total number of transactions for the month.

Look further down the page for a breakdown of monthly transactions by type, along with charts that help you track average GB stored, GB downloaded and count of average stored files for the month.

Caps and Alerts

One of our goals with B2 was to take the surprise out of cloud storage fees. The B2 web GUI sports a Caps & Alerts section to help you control how much you spend on B2.

This is where you can see – and limit – daily storage caps, daily downloads, and daily transactions. “Transactions” are interactions with your account like creating a new bucket, listing the contents of a bucket, or downloading a file.

You can have those alerts sent to your cell phone and email, so you’ll never be hit with an unwelcome surprise in the form of an unexpected bill. The first 10 GB of storage is free, with unlimited free uploads and 1 GB of free downloads each day.

Edit Caps

Click the Edit Caps button to enter dollar amount limits for storage, download bandwidth, Class B and Class C transactions separately (or specify No Cap if you don’t want to be encumbered). This way, you maintain control over how much you spend with B2.

And There’s More

That’s an overview of the B2 web GUI to help you get started using B2 Cloud Storage. If you’re more technical and are interested in connecting to B2 using our API instead, make sure to check out our B2 Starter Guide for a comprehensive overview of what’s under the hood.
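
If you do go the API route, the very first call is authorization. Here’s a minimal sketch with curl; the account ID and application key are placeholders for your own credentials, and the path reflects version 1 of the B2 API.

curl -u "ACCOUNT_ID:APPLICATION_KEY" https://api.backblazeb2.com/b2api/v1/b2_authorize_account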

Still have questions about the B2 web GUI, or ideas for how we can make it better? Fire away in the comments, we want to hear from you!

The post B2 for Beginners: Inside the B2 Web interface appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Firefox 47

Post Syndicated from ris original http://lwn.net/Articles/690154/rss

Firefox 47 has been released. This version enables the VP9 video codec for users with fast machines, plays embedded YouTube videos with HTML5 video if Flash is not installed, and more. There is a blog post about these and other improvements. “Now, we are making it even easier to access synced tabs directly in your desktop Firefox browser. If you’re logged into your Firefox Account, you will see all open tabs from your smartphone or other computers within the sidebar. In the sidebar you can also search for specific tabs quickly and easily.”

See the release notes for more information.

Twitter’s missing manual

Post Syndicated from Eevee original https://eev.ee/blog/2016/02/20/twitters-missing-manual/

I mentioned recently, buried in a post about UI changes, that Twitter’s latest earnings report included this bombshell:

We are going to fix the broken windows and confusing parts, like the .@name syntax and @reply rules, that we know inhibit usage and drive people away

There’s an interesting problem here. UI is hard. You can’t just slap a button on the screen for every feature that could conceivably be used at any given time. Some features are only of interest to so-called “power users”, so they’re left subtle, spread by word-of-mouth. Some features you try to make invisible and heuristic. Some features are added just to solve one influential user’s problem. Some features are, ah, accidental.

A sufficiently mature, popular, and interesting product thus tends to accumulate a small pile of hidden features, sometimes not documented or even officially acknowledged. I’d say this is actually a good thing! Using something for a while should absolutely reward you with a new trick every so often — that below-the-surface knowledge makes you feel involved with the thing you’re using and makes it feel deeper overall.

The hard part is striking a balance. On one end of the spectrum you have tools like Notepad, where the only easter egg is that pressing F5 inserts the current time. On the other end you have tools like vim, which consist exclusively of easter eggs.

One of Twitter’s problems is that it’s tilted a little too far towards the vim end of the scale. It looks like a dead-simple service, but those humble 140 characters have been crammed full of features over the years, and the ways they interact aren’t always obvious. There are rules, and the rules generally make sense once you know them, but it’s also really easy to overlook them.

Here, then, is a list of all the non-obvious things about Twitter that I know. Consider it both a reference for people who aren’t up to their eyeballs in Twitter, and an example of how these hidden features can pile up. I’m also throwing in a couple notes on etiquette, because I think that’s strongly informed by the shape of the platform.

Text

  • Tweets are limited to 140 Unicode characters, meaning that even astral plane characters (such as emoji) only count as one.

  • Leading and trailing whitespace is stripped from tweets.

  • Tweets may contain newlines, and there doesn’t seem to be any limit to how many.

  • In the middle of a tweet, strings of whitespace (e.g. multiple spaces) are preserved. However, more than two consecutive newlines will be reduced to only two.

  • Anything remotely resembling a link will be mangled into some http://t.co/asdf link-shortened garbage. In some cases, such as when talking about a domain name, this can make the tweet longer. You can defeat this by sticking an invisible character, such as U+200D ZERO WIDTH JOINER, around the final dot so it no longer looks like a domain name.

    In official clients, links are shown unmangled, but without the protocol and truncated to about 20 characters. The link to this article, for example, shows as eev.ee/blog/2016/02/2…. However, at least on Web Twitter, copy-pasting preserves the link in full, including protocol.

    Note that Twitter’s knowledge of domains is not exhaustive — it will link “google.com” but not “eev.ee”.

  • For the sake of its SMS-based roots, Twitter supports performing several commands by typing them in a tweet. In particular, if you start a tweet with the word d or m or dm, the second word will be treated as a username, and the rest of the tweet will be DM’d to that user.

  • Accounts managed by multiple people, such as support accounts or politicians’ accounts, sometimes sign tweets with a ^ followed by the author’s initials. This has no special significance to Twitter.

  • You cannot use astral plane characters (which includes most emoji) in your display name or bio; they will be silently stripped. However, you can use anything from the Miscellaneous Symbols or Dingbats blocks, and many of these characters are rendered with color glyphs on Web Twitter. Results may vary on phones and in other clients.

Replies and mentions

A tweet can “mention” other users, which just means including their @handle somewhere in the tweet. This will notify every mentioned user of the tweet.

You can reply to tweets, which threads them together. A tweet can only have one parent (or no parent), but any number of replies. Everything on Twitter is thus arranged into a number of trees, where the root of the tree is a new tweet not responding to anything, and replies branch out from there.

  • A tweet that begins with a mention — that is, the very first character is @ and it’s immediately followed by an extant username — won’t appear on your profile on Web Twitter. It’ll still appear on the “with replies” page. It’ll also appear on your profile on Android Twitter, which doesn’t separate replies from not.

  • A “mention” can only be an existing username. If “foo” is not the name of a user, then @foo is not a mention, and none of the rules for mentions apply.

  • A tweet that begins with a mention won’t appear on the timelines of anyone who follows you, unless they also follow the first person you mention. That is, if you tweet @foo @bar heya, it’ll only appear on the timelines of people who follow both you and @foo.

  • If you put some other character before the first @, the previous rule no longer applies, and your tweet will appear to all your followers. So .@foo @bar heya will be visible to everyone (and show on your Web profile). This is called “dot-replying”. The dot isn’t actually special; it’s just an easy-to-type and unobtrusive character, and any other character works just as well. Some people prefer to put the mentions at the end instead, producing heya @foo @bar.

    Dot-replying in the middle of a tweet is not strictly necessary, but sometimes it’s useful for disambiguation. If you’re replying to @foo, and want to say something about @bar, it would come out as @foo @bar is pretty great, which is a little difficult to read. Adding a dot to make @foo .@bar is pretty great doesn’t do anything as far as Twitter is concerned, but can make it clear that @bar is the subject of a sentence rather than a person being talked to.

  • One last visibility wrinkle: if a tweet appears in your timeline because it begins with a mention of someone you follow, the tweet it replies to will also appear, even if it wouldn’t have on its own.

    Consider three users: A, B, and C. You follow B and C, but not A. User A makes a tweet, and B replies to it (@A cool!). Neither tweet will appear on your timeline — A’s won’t because you don’t follow A, and B’s won’t because it begins with a mention of someone you don’t follow. Then C replies to B’s tweet, which puts the mention of B first (see below). Both B’s and C’s tweets will now appear on your timeline — C’s appears because it begins with a mention of someone you do follow, and B’s appears for context.

    In other words, even if a tweet doesn’t appear on your timeline initially, it may show up later due to the actions of a third party.

  • A reply must, somewhere, mention the author of the tweet it’s replying to. If you reply to a tweet and delete the author’s @handle, it’ll become a new top-level tweet rather than a reply. You can see this in some clients, like Android Twitter: there’s “replying to (display name)” text indicating it’s a reply, and that text disappears if you delete the @handle.

  • There is one exception to the previous rule: if you’re replying to yourself, you don’t have to include your own @handle, even though clients include it by default. So if you want to say something that spans multiple tweets, you can just keep replying to yourself and deleting the @handle. (This is sometimes called a “tweetstorm”.)

    It’s a really good idea to do this whenever you’re making multiple tweets about something. Otherwise, someone who stumbles upon one of the tweets later will have no idea what the context was, and won’t be able to find it without scrolling back however long on your profile.

    If you reply to yourself but leave your @handle at the beginning, the first tweet will appear on your profile, but the others won’t, because they start with a mention. On the other hand, letting the entire tweetstorm appear on your profile can be slightly confusing — the individual tweets will appear in reverse order, because tweets don’t appear threaded on profiles. On Web Twitter, at least, the followups will have a “view conversation” link that hints that they’re replies.

    Either way, the replies will still appear on your followers’ timelines. Even if the replies begin with your @handle, they still begin with a mention of someone your followers all follow: you!

    I’m told that many third-party clients don’t support replying to yourself without your handle included, and the API documentation doesn’t mention that it’s a feature. But I’m also told that only first-party clients require you to mention someone in a reply in order to thread, and that third-party clients will merrily thread anything to anything. (I remember when Web Twitter allowed that, so I totally believe the API still does.) If you don’t use the official clients, I guess give it a whirl and see what happens.

  • The previous rule also applies when making longer replies to someone else. Reply to them once, then reply to yourself with the next tweet (and remove your own @handle). You’ll end up with three tweets all threaded together.

    This is even more important, because Twitter shows the replies to a single tweet in a somewhat arbitrary order, bubbling “important” ones to the top. If you write a very long response and break it across three tweets, all replying to the same original tweet, they’ll probably show as an incoherent jumble to anyone reading the thread. If you make each tweet a reply to the previous one, they’re guaranteed to stay in order.

  • Replying to a tweet will also prefill the @handle of anyone mentioned in the tweet. Replying to a retweet will additionally prefill the @handle of the person who retweeted it. The original author’s @handle always appears first. In some cases, it’s polite to remove some of these; you only need the original author’s @handle to make a reply. (It’s not uncommon to accumulate multiple mentions, then end up in an extended conversation with only one other person, while constantly notifying several third parties. Or you may want to remove the @handle of a locked account that retweeted a public account, to protect their privacy.)

  • Prefilling @handles is done client-side, so some clients have slightly different behavior. In particular, I’ve seen a few people who reply to their own reply to someone else (in order to thread a longer thought), and end up with their own @handle at the beginning of the second reply! You probably don’t want that, because now the second reply begins with a mention of someone all of your followers follow — yourself! — and so both tweets will appear on your followers’ timelines.

  • In official clients (Web and Android, at least), long threads of tweets are collapsed on your timeline. Only the first tweet and the last two tweets are visible. If you have a lot to say about something, it’s a good idea to put the important bits in one of those three tweets where your followers will actually see them. This is another reason it’s polite to thread your tweets together — it saves people from having their timelines flooded by your tweetstorm.

    Sometimes, it’s possible to see multiple “branches” of the same conversation on your timeline. For example, if A makes a few tweets, and B and C both reply, and you follow all three of them, then you’ll see B’s replies and C’s replies separately. Clients don’t handle this particularly well and it can become a bit of a clusterfuck, with the same root tweet appearing multiple times.

  • Because official clients treat a thread as a single unit, you can effectively “bump” your own tweet by replying to it. Your reply is new, so it’ll appear on your followers’ timelines; but the client will also include the first tweet in the thread as context, regardless of its age.

  • When viewing a single tweet, official clients may not show the replies in chronological order. Usually the “best” replies are bumped to the top. “Best” is entirely determined by Twitter, but it seems to work fairly well.

    If you reply to yourself, your own replies will generally appear first, but this is not guaranteed. If you want to link to someone else’s long chain of tweets, it’s safest to link to the last tweet in the thread, since there can only be one unambiguous trail of parent tweets leading back to the beginning. This also saves readers from digging through terrible replies by third parties.

  • If you reply to a tweet with @foo heya, and @foo later renames their account to @quux, the tweet will retain its threading even though it no longer mentions the author of the parent tweet. However, your reply will now appear on your profile, because it doesn’t begin with the handle of an existing user. Note that this means it’s fairly easy for a non-follower to figure out what you renamed your account to, by searching for replies to your old name.

  • Threads are preserved even if some of the tweets are hidden (either because you’ve blocked some participants, or because they have their accounts set to private). Those tweets won’t appear for you, but any visible replies to them will.

  • If a tweet in the middle of a thread is deleted (or the author’s account is deleted), the thread will break at that point. Replies to the deleted tweet won’t be visible when looking at the parent, and the parent won’t be visible when looking at the replies.

  • You can quote tweets by including a link to them in your tweet, which will cause the quoted tweet to appear in a small box below yours. This does not create a reply and will not be part of the quoted tweet’s thread. If you want to do that, you can’t use the retweet/quote button. You have to reply to the tweet, manually include a link to it, and be sure to mention the author.

  • When you quote a tweet, the author is notified; however, unlike a retweet, they won’t be notified when people like or retweet your quote (unless you also mention them). If you don’t want to notify the author, you can take a screenshot (though this doesn’t let them delete the tweet) or use a URL shortener (though this doesn’t let them obviously disable a quote by blocking you).

  • Due to the nature of Twitter, it’s common for a tweet to end up on many people’s timelines simultaneously and attract many similar replies within a short span of time. It’s polite to check the existing replies to a popular tweet, or a tweet from a popular person, before giving your two cents.

  • It’s generally considered rude to barge into the middle of a conversation between two other people, especially if they seem to know each other much better than you know them, and especially if you’re being antagonistic. There are myriad cases where this may be more or less appropriate, and no hard and fast rules. You’re a passerby overhearing two people talking on the street; act accordingly.

  • Someone unrecognized who replies to you — especially someone who doesn’t follow you, or who is replying to the middle of a conversation, or who is notably arrogant or obnoxious — is often referred to as a “rando”.

  • When you quote or publicly mention someone for the sake of criticizing them, be aware that you’re exposing them to all of your followers, some of whom may be eager for an argument. If you have a lot of followers, you might inadvertently invite a dogpile.

Hashtags

Hashtags are a # character followed by some number of non-whitespace characters. Anecdotally, they seem to be limited server-side to 100 characters, but I haven’t found any documentation on this.

  • Exactly which characters may appear in a hashtag is somewhat inconsistent, and has quietly changed at least once.

  • The only real point to hashtags is that you can click on them in clients to jump directly to search results. Note that searching for #foo will only find #foo, but searching for foo will find both foo and #foo.

  • Hashtags can appear in the “trending” widget, but so can any other regular text.

  • There is no reason to tag a bunch of random words in your tweets. No one is searching Twitter for #funny. Doing this makes you look like an out-of-touch marketer who’s trying much too hard.

  • People do sometimes use hashtags as “asides” or “moods”, but in this case the tag isn’t intended to be searched for, and the real point of using a hashtag is that the link color offsets it from the main text.

  • Twitter also supports “cashtags”, which are prefixed with a $ instead and are generally stock symbols. I only even know this because it makes shell and Perl code look goofy.

Media

A tweet may have one embedded attachment.

  • You may explicitly include a set of up to four images, or a video, or a poll. You cannot combine these within a single tweet. Brands™ have access to a handful of other embedded gizmos.

  • If you include images or a video, you will lose 24 characters of writing space, because a direct link to the images/video will be silently added to the end of your tweet. This is for the sake of text-only clients, e.g. people using Twitter over SMS, so they can see that there’s an attachment and possibly view it in a browser.

  • Including a poll will not append a link, but curiously, you’ll still lose 24 characters. It’s possible this is a client bug, but it happens in both Web and Android Twitter.

  • Alternative clients may not support new media types at first. In particular, people who used TweetDeck were frequently confused right after polls were launched, because TweetDeck showed only the tweet text and no indication that a poll had ever been there. Some third-party clients still don’t support polls. Consider mentioning when you’re using a new attachment type. Might I suggest prefixing your tweet with 📊?

  • If you don’t include an explicit attachment, Twitter will examine the links in your tweet, in reverse order. If you link to a tweet, that tweet will be quoted in yours. If you link to a website that supports Twitter “cards” (small brief descriptions of a site, possibly with images), that card will be attached. There can only be one attachment, so as soon as Twitter finds something it can use, it stops looking.

  • You can embed someone else’s media in your own tweet by ending it with a link to the media URL — that is, the one that ends with /photo/1. This is different from a quoted tweet, and won’t notify the original tweeter.

  • Quoted tweets are always just tweets that include links to other tweets. Even if the tweet is deleted, an embed box will still appear, though it’ll be grayed out and say the tweet is unavailable.

    If the link is the last thing to appear in the tweet text, official clients will not show the link. This can be extremely confusing if you try to link to two tweets — the first one will be left as a regular link, and the second one will be replaced by a quoted tweet, so at a glance it looks like you linked to a tweet and it was also embedded. A workaround for this is just to add text after the final link, so it’s not the last thing in the tweet and thus isn’t hidden.

  • Twitter cards may be associated with a Twitter account. On Android Twitter (not Web Twitter!), replying to a tweet with a card will also include the @handle for the associated account. For example, replying to a tweet that links to a YouTube video will prefill @YouTube. This is pretty goofy, since YouTube itself didn’t make the video, and it causes replies to notify the person even though the original link doesn’t.

  • Uploaded media may be flagged as “sensitive”, which generally means “pornographic”. This will require viewers to click through a warning to see the media, unless they’re logged in and have their account set to skip the warning. Flagged media also won’t appear in the sidebar on profile pages for people who have the warning enabled.

  • The API supports marking individual tweets as containing sensitive media, but official clients do not — instead, there’s an account setting that applies to everything you upload from that point forward. Media may also be flagged by other users as sensitive. Twitter also has some sort of auto-detection for sensitive media, which I only know about because it sometimes thinks photos of my hairless cats are pornographic.

  • If your own tweets have “sensitive” media attached, you will have to click through the warning, even if you have the warning disabled. A Twitter employee tells me this is so you’re aware when your own tweets are flagged, but the message still tells you to disable the warning in account settings, so this is mostly just confusing.

    Curiously, if you see your own tweet via a retweet, the warning doesn’t appear.

Blocking and muting

  • A blocked user cannot view your profile. They can, of course, use a different account, or merely log out. This is entirely client-side, too, so it’s possible that some clients don’t even support this “feature”.

  • A blocked user cannot like or retweet your tweets.

  • A blocked user cannot follow you. If you block someone who’s already following you, they’ll be forced to immediately unfollow. Likewise, you cannot follow a blocked user.

  • A blocked user’s tweets won’t appear on your timeline, or in any thread. As of fairly recently, their tweets won’t appear in search results, either. However, if you view the profile of someone who’s retweeted a blocked user, you will still see that retweet.

  • A blocked user can see your tweets, if someone they follow retweets you.

  • A blocked user can mention or reply to you, though you won’t be notified either by the tweet itself or by any retweets/likes. However, if someone else replies to them, your @handle will be prefilled, and you’ll be notified. Also, other people viewing your tweets will still see their replies threaded.

  • A blocked user can link to your tweets — however, rather than an embedded quote, their tweet will have a gray “this tweet is unavailable” box attached. This effect is retroactive. However (I think?), if a quoted tweet can’t be shown, the link to the tweet is left visible, so people can still click it to view the tweet manually.

  • Muting has two different effects. If you mute someone you’re following, their tweets won’t appear in your timeline, but you’ll still get notifications from them. This can be useful if you set your phone to only buzz on notifications from people you follow. If you mute someone you’re not following, nothing they do will send you notifications. Either way, their tweets will still be visible in threads and search results.

  • Relatedly, if you follow someone who’s a little eager with the retweeting, you can turn off just their retweets. It’s in the menu on their profile.

  • It’s trivial to tell whether someone’s blocked you, since their profile will tell you. However, it’s impossible to know for sure if someone has muted you or is just manually ignoring you, since being muted doesn’t actually prevent you from doing anything.

  • You can block and mute someone at the same time, though this has no special effect. If you unblock them, they’ll just still be muted.

  • The API strips out tweets from blocked and muted users server-side for streaming requests (such as your timeline), but leaves it up to the client for other requests (such as viewing a single tweet). So it’s possible that a client will neglect to apply the usual rule of “you never see a blocked user’s tweets in threads”. In particular, I’ve heard several reports that this is the case in the official iOS Twitter (!).

  • Tweeting screenshots of “you have been blocked” is getting pretty old and we can probably stop doing it.

Search

  • Almost all of Twitter’s advanced search options are exposed on the advanced search page. All of them are shorthand for using a prefix in your search query; for example, “from these accounts” just becomes something like from:username.

  • The one that isn’t listed there is filter:, which is only mentioned in the API documentation. It can appear as filter:safe, filter:media, filter:images, or filter:links. It’s possible there are other undocumented forms.

  • Search applies to unshortened links, so you can find links to a website just by searching for its URL. However, because Twitter displays links without a protocol (http://), you have to leave it off when searching. Be aware that people who mention your work without mentioning you might be saying unkind things about it.

    That said, I’ve also run into cases where searching for a partial URL doesn’t find tweets that I already know exist, and I’m not sure why.

  • As a side effect, you can search for quotes of a given user’s tweets by searching for twitter.com/username/status, because all tweet URLs begin with that prefix. This will also include any tweets from that user that have photos or video attached, because attaching media appends a photo URL, but you can fix that by adding -from:username.

  • Searching for to:foo will only find tweets that begin with @foo; dot-replies and other mentions are not included. Searching for @foo will find mentions as well as tweets from that person. To find only someone’s mentions, you can search for @foo -from:foo. You can combine this with the above trick to find quotes as well.

  • I’ve been told that from: only applies to the handle a user had when the tweet was made (i.e. doesn’t take renames into account), but this doesn’t match my own experience. It’s possible the behavior is different depending on whether the old handle has been reclaimed by someone else.

  • Some clients, such as TweetDeck, support showing live feeds of search results right alongside your timeline and notifications. It’s therefore possible for people to keep an eye on a live stream of everyone who’s talking about them, even when their @handle isn’t mentioned. Bear this in mind when grumbling, especially about people whose attention you’d prefer to avoid.

  • Namesearch — that is, look for mentions of you or your work that don’t actually @-mention you — with caution. Liking or replying amicably to tweets that compliment you is probably okay. Starting arguments with people who dislike your work is rude and kind of creepy, and certainly not likely to improve anyone’s impression of you.

Locked accounts

  • You may set your account to private, which will hide your tweets from the general public. Only people who follow you will be able to see your tweets. Twitter calls this “protected”, but since it shows a lock icon next to your handle, everyone calls it “locked”.

  • Specifically: your banner, avatar, display name, and bio (including location, website, etc.) are still public. The number of tweets, follows, followers, likes, and lists you have are also public. Your actual tweets, media, follows, followers, lists, etc. are all hidden.

  • iOS Twitter hides the bio and numbers, as well, which is sort of inconvenient if you were using it to explain who you are and who you’re cool with having follow you.

  • When you lock your account, any existing followers will remain. Anyone else will only be able to send a follow request, which you can then approve or deny. You can force anyone to unfollow you at any time (whether locked or not) by blocking and then unblocking them. Or just blocking them.

  • Follow requests are easy to miss; only a few places in the UI make a point of telling you when you have new ones.

  • Approving or denying a follow request doesn’t directly notify the requester. If you approve, obviously they’ll start seeing your tweets in their timeline. If you deny, the only difference is that if they look at your profile again, the follow button will no longer say “pending”.

  • If you unlock your account, any pending follow requests are automatically accepted.

  • The only way to see your pending follows (accounts you have tried to follow that haven’t yet accepted) is via the API, or a client that makes use of the API. The official clients don’t show them anywhere.

  • No one can retweet a locked account, not even followers.

  • Quoting doesn’t work with locked accounts; the quoted tweet will only show the “unavailable” message, even if a locked account quotes itself. Clicking the tweet link will still work, of course, as long as you follow the quoted account.

  • Locked accounts never create notifications for people who aren’t following them. A locked account can like, retweet, quote, follow, etc. as usual, and the various numbers will go up, but only their followers will be notified.

  • A locked account can reply to another account that doesn’t follow them, but that account won’t have any way to tell. However, an unlocked third party that follows both accounts could then make another reply, which would prefill both @handles, and (if left unchanged) alert the other account to the locked account’s presence.

  • Similarly, if a locked account retweets a public account, anyone who tries to reply to the retweet will get the locked account’s @handle prefilled.

  • If a locked account likes some of your tweets (or retweets you, or replies, etc.), and then you follow them, you won’t see retroactive notifications for that activity. Notifications from accounts you can’t see are never created in the first place, not merely hidden from your view. Of course, you’re free to look through their tweets and likes manually once you follow them.

  • Locked accounts do not appear in the lists of who liked or retweeted a tweet (except, of course, when viewed by someone following them). Web Twitter will hint at this by saying something akin to “X users have asked not to be shown in this view.” at the bottom of such a list.

  • While a locked account’s own follows and followers are hidden, a locked account will still appear publicly in the following/follower lists of other unlocked accounts. There is no blessed way to automatically cross-reference this, but be aware that the existence of a locked account is still public. In particular, if you follow someone who keeps an eye on their follower count, they can just look at their own list of followers to find you.

  • Anyone can still mention a locked account, whether or not they follow it, and it’ll receive notifications.

  • Open DMs (“receive direct messages from anyone”) work as normal for locked accounts. A locked account can send DMs to anyone with open DMs, and a locked account may turn on open DMs to receive DMs from anyone.

  • Replies to a locked account are not protected in any way. If a locked account participates in a thread, its own tweets will be hidden from non-followers, but any public tweets will be left in. Also, anyone can search for mentions of a locked account to find conversations it’s participated in, and may be able to infer what the locked account was saying from context.

API, other clients, etc.

I’ve mentioned issues with non-primary clients throughout, but a couple more things to be aware of:

  • Web Twitter has some keyboard shortcuts, which you can view by pressing ?.

  • When I say Web Twitter throughout this document, I mean desktop Web Twitter; there’s also a mobile Web Twitter, which is much simpler.

  • The official API doesn’t support a number of Twitter features, including polls, ads, and DMs with multiple participants. Clients that use the API (i.e. clients not made by Twitter) thus cannot support these features.

  • Even TweetDeck, which is maintained by Twitter, frequently lags behind in feature support. TweetDeck had the original (client-side-only) implementation of muting, but even after Twitter added it as a real feature, TweetDeck was never changed to make use of it. So TweetDeck’s muting is separate from Twitter’s muting.

  • Tweets know what client they were sent from. Official Twitter apps don’t show this any more, but it’s still available in the API, and some alternative clients show it.

  • By default, Twitter allows people to find your account by searching for your email address or phone number. You may wish to turn this off.

  • Twitter has a “collections” feature, which lets you put any public tweets you like (even other people’s) in a group for other people to look over. However, no primary client lets you create one; you have to do it via the API, the second-party client TweetDeck, the somewhat convoluted Curator that seems more aimed at news media and business, or a third-party client. Collections aren’t listed anywhere public (you have to link to them directly) — the only place to see even a list of your own collections via primary means is the “Collection” tab when creating a new widget on the web. Tweets in a collection are by default shown in the order you added them, newest first; the API allows reordering them, and Curator supports dragging to reorder, but TweetDeck doesn’t support reordering at all.

  • Lists are a thing. I’ve never really used them. They don’t support a lot of the features the regular timeline does; for example, threaded tweets aren’t shown together, and lists don’t provide access to locked accounts. You can create a private list and add people to it to follow them without their knowledge, though.

  • You can “promote” a tweet, i.e. turn it into an ad, which is generally only of interest to advertisers. However, promoted tweets have the curious property that they don’t appear on your profile or in your likes or in search results for anyone. It’s possible to target a promoted tweet at a specific list of users (or no one!), which allows for a couple creative hacks that you’ll have to imagine yourself.

  • And then there’s the verified checkmark (given out arbitrarily), the power tools given to verified users (mysterious), the limits on duplicate tweets and follows and whatnot (pretty high), the analytics tools (pretty but pointless), the secret API-only notifications (Twitter tells you when your tweet is unfavorited!), the Web Twitter metadata that let me write a hack to hide mentions from non-followers… you get the idea.

How to Govern Your Application Deployments by Using Amazon EC2 Container Service and Docker

Post Syndicated from Michael Capicotto original https://blogs.aws.amazon.com/security/post/Tx3UTL7PQ6796V5/How-to-Govern-Your-Application-Deployments-by-Using-Amazon-EC2-Container-Service

Governance among IT teams has become increasingly challenging, especially when dealing with application deployments that involve many different technologies. For example, consider the case of trying to collocate multiple applications on a shared operating system. Accidental conflicts can stem from the applications themselves, or the underlying libraries and network ports they rely on. The likelihood of conflicts is heightened even further when security functionality is involved, such as intrusion prevention or access logging. Such concerns have typically resulted in security functions being relegated to their own independent operating systems via physical or virtual hardware isolation (for example, an inline firewall device).

In this blog post, I will show you how to eliminate these potential conflicts while also deploying your applications in a continuous and secure manner at scale. We will do this by collocating different applications on the same operating system, through the use of Amazon EC2 Container Service (ECS) and Docker.

Let’s start with a brief overview of Docker.

Docker explained

Simply put, Docker allows you to create containers, which wrap your applications into a complete file system and contain everything that this software needs to run. This means you can transport that container onto any environment running Docker, and it will run the same, while staying isolated from other containers and the host operating system. This isolation between containers eliminates any potential conflicts the applications may have with each other, because they are each running in their own separate run-time environments.

The hands-on portion of this post will focus on creating two Docker containers: one containing a simple web application (“application container”), and the other containing a reverse proxy with throttling enabled (“proxy container”), which is used to protect the web application. These containers will be collocated on the same underlying Amazon EC2 instance using ECS; however, all network traffic between the web application and the outside world will be forced through the proxy container, as shown in the following diagram. This tiered network access can be the basis of a security overlay solution in which a web application is not directly reachable from the network to which its underlying instance is connected. All inbound application requests are forced through a proxy container that throttles requests. In practice, this container could also perform activities such as filtering, logging, and intrusion detection.

Figure 1. Network isolation using Docker containers
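
To make the layering concrete, here is a rough sketch of the kind of ECS task definition this pattern leads to. It is illustrative only: the image names, memory values, and link name are placeholders, and the walkthrough that follows builds its own artifacts. The key idea is that only the proxy container maps a host port, so the application container is reachable solely through the container link to the proxy.

cat > task-definition.json <<'EOF'
{
  "family": "web-with-proxy",
  "containerDefinitions": [
    {
      "name": "proxy",
      "image": "my-dockerhub-username/nginx-reverse-proxy",
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "links": ["application"]
    },
    {
      "name": "application",
      "image": "my-dockerhub-username/amazon-ecs-sample",
      "memory": 256,
      "essential": true
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://task-definition.json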

Create your Docker containers

To start, let’s create a Docker container that contains a simple PHP web application. This Docker container will represent the application container in the previous diagram. For a more detailed guide to the following steps, see Docker Basics.

Install Docker

Launch an instance with the Amazon Linux AMI. For more information, see Launching an Instance in the Amazon EC2 User Guide for Linux Instances.

Connect to your instance. For more information, see Connect to Your Linux Instance.

Update the installed packages and package cache on your instance:

[ec2-user ~]$ sudo yum update -y

Install Docker:

[ec2-user ~]$ sudo yum install -y docker

Start the Docker service:

[ec2-user ~]$ sudo service docker start

Add the ec2-user to the Docker group so that you can execute Docker commands without using sudo:

[ec2-user ~]$ sudo usermod -a -G docker ec2-user

Log out and log back in again to pick up the new Docker group permissions.
 

Verify that the ec2-user can run Docker commands without sudo:

[ec2-user ~]$ docker info

Sign up for a Docker Hub account

Docker uses images to launch containers, and these images are stored in repositories. The most common Docker image repository (and the default repository for the Docker daemon) is Docker Hub. Although you don’t need a Docker Hub account to use ECS or Docker, having a Docker Hub account gives you the freedom to store your modified Docker images so that you can use them in your ECS task definitions. Docker Hub offers public and private registries. You can create a private registry on Docker Hub and configure private registry authentication on your ECS container instances to use your private images in task definitions.

For more information about Docker Hub and to sign up for an account, go to https://hub.docker.com.
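If you do decide to use a private Docker Hub repository, the ECS container agent reads registry credentials from the /etc/ecs/ecs.config file on each container instance. The following is a minimal sketch of that configuration; the auth string and email address are placeholders, not working credentials:

ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"<base64-encoded-credentials>","email":"you@example.com"}}

This walkthrough uses public images, so you can skip that configuration here.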

Create a Docker image containing a simple PHP application

Install git and use it to clone the simple PHP application from our GitHub repository onto your system:

[ec2-user ~]$ sudo yum install -y git

[ec2-user ~]$ git clone https://github.com/awslabs/ecs-demo-php-simple-app

Change directories to the ecs-demo-php-simple-app folder:

[ec2-user ~]$ cd ecs-demo-php-simple-app

Examine the Dockerfile in this folder:

[ec2-user ecs-demo-php-simple-app]$ cat Dockerfile

A Dockerfile is a manifest that contains instructions about how to build your Docker image. For more information about Dockerfiles, go to the Dockerfile Reference.
 

Build the Docker image from our Dockerfile. Replace the placeholder user name with your Docker Hub user name (be sure to include the blank space and period at the end of the command):

[ec2-user ecs-demo-php-simple-app]$ docker build -t my-dockerhub-username/amazon-ecs-sample .

Run docker images to verify that the image was created correctly and that the image name contains a repository that you can push to (in this example, your Docker Hub user name):

[ec2-user ecs-demo-php-simple-app]$ docker images

Upload the Docker image to your Docker Hub account.

Log in to Docker Hub:

[ec2-user ecs-demo-php-simple-app]$ docker login

Check to ensure your login worked:

[ec2-user ecs-demo-php-simple-app]$ docker info

Push your image to Docker Hub:

[ec2-user ecs-demo-php-simple-app]$ docker push my-dockerhub-username/amazon-ecs-sample

Now that you’ve created this first Docker image, you can move on to create your second Docker image, which will be deployed into the proxy container.

Create a reverse proxy Docker image

For our second Docker image, you will build a reverse proxy using NGINX and enable throttling. This will allow you to simulate security functionality for the purpose of this blog post. In practice, this proxy container could contain any security-related software you desire, and could be produced by a security team and delivered to the team responsible for deployments as a standalone artifact.

Using SSH, connect to the Amazon Linux instance you used in the last section.

Ensure that the Docker service is running and you are logged in to your Docker Hub account (instructions in previous section).

Create a local directory called proxy-image, and switch into it.

In this directory, you will create two files. You can copy and paste the contents for each as follows.

First, create a file called Dockerfile, which Docker uses to build an image according to your specifications. Copy and paste the following contents into the file. It starts from a base Ubuntu image, runs an update command, installs NGINX (your reverse proxy), copies the nginx.conf file from your local machine into the image, appends a setting that keeps NGINX running in the foreground (so the container doesn’t exit immediately), exposes port 80 for HTTP traffic, and finally starts the NGINX service.

FROM ubuntu
RUN apt-get update && apt-get install -y nginx
COPY nginx.conf /etc/nginx/nginx.conf
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD service nginx start

Next, create a supporting file called nginx.conf. This overwrites the standard NGINX configuration file so that NGINX acts as a reverse proxy for all HTTP traffic. Note that the proxy_pass directive points at the hostname application-container, which will resolve through the container link you define later in the ECS task definition. Throttling has been left out for the time being.

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
 
events {
     worker_connections 768;
     # multi_accept on;
}
 
http {
  server {
    listen               80;

# Proxy pass to servlet container

    location / {
      proxy_pass                http://application-container:80;
    }

  }
}

Now you are ready to build your proxy image. Run the following command with your specific Docker Hub information to instruct Docker to do so (be sure to include the blank space and period at the end of the command).

docker build -t my-dockerhub-username/proxy-image .

When this process completes, push your built image to Docker Hub.

docker push my-dockerhub-username/proxy-image

You have now successfully built both of your Docker images and pushed them to Docker Hub. You can now move on to deploying Docker images with Amazon ECS.

Deploy your Docker images with Amazon ECS

Amazon ECS is a container management service that allows you to manage and deploy Docker containers at scale. In this section, we will use ECS to deploy your two containers on a single instance. All inbound and outbound traffic from your application will be funneled through the proxy container, allowing you to enforce security measures on your application without modifying the application container in which it lives.

In the following diagram, you can see a visual representation of the ECS architecture we will be using. An ECS cluster is simply a logical grouping of container instances (we are just using one) that you will deploy your containers onto. A task definition specifies one or more container definitions, including which Docker image to use, port mappings, and more. This task definition allows you to model your application by having different containers work together. An instantiation of this task definition is called a task.

Figure 2. The ECS architecture described in this blog post

Use ECS to deploy your containers

Now that we have both Docker images built and stored in the Docker Hub repository, we can use ECS to deploy these containers.

Create your ECS task definition

Navigate to the AWS Management Console, and then to the EC2 Container Service page.

If you haven’t used ECS before, you should see a page with a Get started button. Click this button, and then click Cancel at the bottom of the page. If you have used ECS before, skip straight to the next step.

Click Task Definitions in the left sidebar, and then click Create a New Task Definition. 

Give your task definition a name, such as SecurityExampleTask.

Click Add Container, and set up the first container definition with the following parameters. In the Image box, enter the path to your proxy image in Docker Hub (in other words, username/proxy-image); in the Links box, enter the name of your application container (application-container) so the proxy can reach it. Don’t forget to click Advanced container configuration and complete all the fields.

Container Name: proxy-container
Image: username/proxy-image
Memory: 256
Port Mappings
Host port: 80
Container port: 80
Protocol: tcp
CPU: 256
Links: application-container

After you have populated the fields, click Add. Then, repeat the same process for the application container according to the following specifications. Note that the application container does not need a link back to the proxy container—doing this one way will suffice for this example.

Container Name: application-container
Image: username/amazon-ecs-sample
Memory: 256
CPU: 256

After you have populated the fields, click Add. Now, click the Configure via JSON tab to see the task definition that you have created. When you are done viewing this, click Create.
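The JSON on that tab should look roughly like the following sketch (the exact formatting and any defaulted fields may differ, and the image names reflect your own Docker Hub user name):

{
  "family": "SecurityExampleTask",
  "containerDefinitions": [
    {
      "name": "proxy-container",
      "image": "my-dockerhub-username/proxy-image",
      "cpu": 256,
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "protocol": "tcp" }
      ],
      "links": ["application-container"]
    },
    {
      "name": "application-container",
      "image": "my-dockerhub-username/amazon-ecs-sample",
      "cpu": 256,
      "memory": 256,
      "essential": true
    }
  ]
}

The links entry is what later allows the proxy to reach the web application at the hostname application-container, matching the proxy_pass directive in nginx.conf.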

Now that you have created your task definition, you can move on to the next step.

Deploy an ECS container instance

In the ECS console, click Clusters in the left sidebar. If a cluster called default does not already exist, click Create Cluster and create a cluster called default (case sensitive).

Launch an instance with an ECS-optimized Amazon Machine Image (AMI), ensuring it has a public IP address and a path to the Internet. For more information, see Launching an Amazon ECS Container Instance. This is the instance onto which you’ll deploy your Docker images.

When your instance is up and running, navigate to the ECS section of the AWS Management Console, and click Clusters.

Click the cluster called default. You should see your instance under the ECS Instances tab. After you have verified this, you can move on to the next step.

Run your ECS task

Navigate to the Task Definitions tab on the left of the AWS Management Console, and select the check box next to the task definition you created. Click Actions, and then select Run Task.

On the next page, ensure the cluster is set to default and the number of tasks is 1, and then click Run Task.

After the process completes, click the Clusters tab on the left of the AWS Management Console, select the default cluster, and then click the Tasks tab. Here, you can see your running task. It should have a green Running status. After you have verified this, you can proceed to the next step. If you see a Pending status, the task is still being deployed.

Click the ECS Instances tab, where you should see the container instance that you created earlier. Click the container instance to get more information, including its public IP address. If you copy and paste this public IP address into your browser’s address bar, you should see your sample PHP website!

If you do not see your PHP website, first ensure you have built your web application correctly by following the steps above in “Create a Docker image containing a simple PHP application,” including pushing the image to Docker Hub. Then, ensure your task is in the green Running state.

Try to refresh the page a couple of times, and you will notice that no throttling is currently taking place. To fix this, make a slight modification. First, sign back in to the Amazon Linux instance where you built the two Docker images, and navigate to the proxy-image directory. Change the nginx.conf file to look like the following example. (Notice that two lines have been added, the limit_req_zone and limit_req directives, which throttle requests to 3 per minute. This is an extremely low rate and is used only to show a working solution in this example.)

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
 
events {
     worker_connections 768;
     # multi_accept on;
}
 
http {
  limit_req_zone  $binary_remote_addr  zone=one:10m   rate=3r/m;
  server {
    listen               80;
 
    # Proxy pass to servlet container
 
    location / {
      proxy_pass                http://application-container:80;
      limit_req zone=one burst=5 nodelay;
    }
  }
}

Following the same steps you followed earlier in “Create a reverse proxy Docker image” (specifically, the docker build and docker push commands), rebuild the proxy image and push it to Docker Hub. Now, stop the task that is currently running in the ECS console, and deploy it again by selecting Run New Task and choosing the same task definition as before. This will pick up the new image that you pushed to Docker Hub.

When you see a status of Running next to the task in the ECS console, paste the container instance’s public IP address into your browser’s address bar again. You should see the sample PHP website. Refresh the page and wait for it to load, repeating this process a few times. On the fourth refresh, an error should be shown and the page will not be displayed. This means your throttling is functioning correctly.
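If you would rather verify the throttling from a terminal than by refreshing the browser, a quick loop of requests against the instance shows the same effect (replace INSTANCE_PUBLIC_IP with the public IP address you used above; this is a hypothetical test command, run from any machine with curl installed). Once the burst allowance is exhausted, NGINX answers with 503 status codes instead of 200:

for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://INSTANCE_PUBLIC_IP/; done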

Congratulations on completing this walkthrough!

Closing Notes

In this example, we performed Docker image creation and manual task definition creation. (Also see Set up a build pipeline with Jenkins and Amazon ECS on the AWS Application Management Blog for a walkthrough of how to automate Docker task definition creation using Jenkins.) This separation between image definition and deployment configuration can be leveraged for governance purposes. The web and security tiers can be owned by different teams that produce Docker images as artifacts, and a third team can ensure that the two tiers are deployed alongside one another and in this specific tiered-network configuration.

We hope that you find this example of leveraging a deployment service for security and governance purposes useful. You can find additional examples of application configuration options in the AWS Application Management Blog.

We look forward to hearing about your use of this strategy in your organization’s deployment process. If you have questions or comments, post them below or on the EC2 forum.

– Michael

Building a Near Real-Time Discovery Platform with AWS

Post Syndicated from Assaf Mentzer original https://blogs.aws.amazon.com/bigdata/post/Tx1Z6IF7NA8ELQ9/Building-a-Near-Real-Time-Discovery-Platform-with-AWS

Assaf Mentzer is a Senior Consultant for AWS Professional Services

In the spirit of the U.S. presidential election of 2016, in this post I use Twitter public streams to analyze the candidates’ performance, both Republican and Democratic, in a near real-time fashion. I show you how to integrate AWS managed services—Amazon Kinesis Firehose, AWS Lambda (Python function), and Amazon Elasticsearch Service—to create an end-to-end, near real-time discovery platform.

The following screenshot is an example of a Kibana dashboard on top of geo-tagged tweet data. This screenshot was taken during the fourth Republican presidential debate (November 10, 2015).

Kibana dashboard on top of geotagged tweet data

The dashboard shows tweet data related to the presidential candidates (only tweets that contain a candidate’s name):

Top 10 Twitter mentions (@username) – you can see that Donald Trump is the most mentioned candidate

Sentiment analysis

Map visualization – Washington DC is the most active area

The dashboard has drill-down capabilities; choosing one of the sentiments in the pie chart or one of the @mentions in the bar chart changes the view of the entire dashboard accordingly. For example, you can see the sentiment analysis and geographic distribution for a specific candidate. The dashboard shows data from the last hour, and is configured to refresh the data every 30 seconds.

Because the platform built in this post collects all geo-tagged public Twitter data and filters data only in the dashboard layer, you can use the same solution for other use cases by just changing the filter search terms.

Use same solution for other use cases by changing filter search terms

Architecture

This platform has the following architecture:

A producer device (in this case, the Twitter feed) puts data into Amazon Kinesis Firehose.

Firehose automatically buffers the data (in this case, until a 5 MB batch has accumulated or a 5-minute interval has elapsed, whichever comes first) and delivers the data to Amazon S3.

A Python Lambda function is triggered when a new file is created on S3 and indexes the S3 file content to Amazon Elasticsearch Service.

The Kibana application runs on top of the Elasticsearch index to provide a visual display of the data.

Platform architecture

Important: Streaming data can be pushed directly to Amazon Elasticsearch Service. The architecture described in this post is recommended when data has to be persisted on S3 for further batch/advanced analysis (a lambda architecture, not related to AWS Lambda) in addition to the near real-time analysis on top of Elasticsearch Service, which might retain only “hot” data (the last x hours).

Prerequisites

To create this platform, you’ll need an AWS account and a Twitter application. Sign in with your Twitter account and create a new application at https://apps.twitter.com/. Make sure your application is set for ‘read-only’ access and then choose Create My Access Token at the bottom of the Keys and Access Tokens tab. By this point, you should have four Twitter application keys: consumer key (API key), consumer secret (API secret), access token, and access token secret. Write down these keys.

Create Amazon Elasticsearch Service cluster

Start by creating an Amazon Elasticsearch Service cluster that will hold your data for near real-time analysis. Elasticsearch Service includes built-in support for Kibana, which is used for visualization on top of Elasticsearch Service.

Sign in to the Amazon Elasticsearch Service console.

Choose Create a new domain (or Get Started, if this is your first time in the console).

Name your domain “es-twitter-demo” and choose Next.

Keep the default selections and choose Next.

Choose the Allow open access to the domain template for the access policy and click Next.

Note: This is not a recommended approach and should only be used for this demo. Please read the documentation for how to set up the proper permissions.

Choose Confirm and create.

Within ~10 minutes, your domain is ready. When the creation process has reached a status of Active, the domain should be associated with both an Elasticsearch Service endpoint and a Kibana URL, which you need to store for later steps.

Your domain is ready
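If you have the AWS CLI configured, you can also look up the endpoint without the console. The following command (using the domain name chosen above) returns the domain description, including its Endpoint value:

aws es describe-elasticsearch-domain --domain-name es-twitter-demo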

Create an IAM role for Firehose

Use a Firehose delivery stream to ingest the Twitter streaming data and put it into Amazon S3. Before you can ingest the data into Firehose, you need to set up an IAM role that allows Firehose to call AWS services on your behalf. In this example, the Twitter feed, which is your producer application, creates the Firehose delivery stream based on that IAM role.

Create a new IAM role named “firehose_delivery_role” with a trust policy that allows the firehose.amazonaws.com service to assume it, and attach the following permissions policy, which grants access to the demo bucket and the objects in it (replace A_BUCKET_YOU_SETUP_FOR_THIS_DEMO with your S3 bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::A_BUCKET_YOU_SETUP_FOR_THIS_DEMO",
        "arn:aws:s3:::A_BUCKET_YOU_SETUP_FOR_THIS_DEMO/*"
      ]
    }
  ]
}
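The Node.js producer application used later in this post creates the delivery stream for you, so the following step is optional. If you prefer to create the stream yourself, a roughly equivalent AWS CLI call looks like this sketch (the stream name, account ID, bucket, and prefix are placeholders and must match the values you later put in the producer’s configuration):

aws firehose create-delivery-stream \
  --delivery-stream-name twitter-stream \
  --s3-destination-configuration '{
    "RoleARN": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/firehose_delivery_role",
    "BucketARN": "arn:aws:s3:::A_BUCKET_YOU_SETUP_FOR_THIS_DEMO",
    "Prefix": "twitter/raw-data/",
    "BufferingHints": { "SizeInMBs": 5, "IntervalInSeconds": 300 }
  }'

The buffering hints correspond to the 5 MB / 5-minute behavior described in the architecture section above.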

Create a Lambda function

For this example, use a Python function (lambda_function.py) that is triggered when a new file is created on S3. The function does the following:

Reads the file content

Parses the content to JSON format (Elasticsearch Service stores documents in JSON format).

Analyzes Twitter data (tweet_utils.py):

Extracts Twitter mentions (@username) from the tweet text.

Extracts sentiment based on emoticons. If there’s no emoticon in the text the function uses textblob sentiment analysis.

Loads the data to Elasticsearch Service (twitter_to_es.py) using the elasticsearch-py library.

The Python code is available in aws-big-data-blog repository.
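To make the flow concrete, here is a minimal, self-contained sketch of such a handler, written for the Python 2.7 runtime used in this walkthrough. The index name, document type, hard-coded endpoint, and the very naive emoticon-based sentiment logic are simplifications of the real code, which is split across lambda_function.py, tweet_utils.py, and twitter_to_es.py and handles more cases:

import json
import re
import urllib

import boto3
from elasticsearch import Elasticsearch, helpers

s3 = boto3.client('s3')
# In the real code the endpoint comes from config.py; replace the host below.
es = Elasticsearch(hosts=[{'host': 'YOUR_ES_ENDPOINT', 'port': 443}], use_ssl=True)

def lambda_handler(event, context):
    # The S3 event carries the bucket and key of the file Firehose just delivered.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = urllib.unquote_plus(record['object']['key'].encode('utf8'))

    # Each line of the delivered object is one raw tweet in JSON format.
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    tweets = [json.loads(line) for line in body.splitlines() if line.strip()]

    # Enrich every tweet with its @mentions and a sentiment label, then
    # bulk-index the documents into the Elasticsearch Service domain.
    actions = []
    for tweet in tweets:
        text = tweet.get('text', '')
        tweet['mentions'] = re.findall(r'@(\w+)', text)
        if ':)' in text:
            tweet['sentiments'] = 'positive'
        elif ':(' in text:
            tweet['sentiments'] = 'negative'
        else:
            tweet['sentiments'] = 'neutral'
        actions.append({'_index': 'twitter', '_type': 'tweet', '_source': tweet})
    helpers.bulk(es, actions)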

Download the deployment package and unzip to the s3-twitter-to-es-python folder.

Modify the s3-twitter-to-es-python/config.py file by changing the value of es_host to the Elasticsearch Service endpoint of your domain.

Zip the folder content on your local environment as my-s3-twitter-to-es-python.zip (important: zip the folder content, not the folder itself).

Sign in to the Lambda console.

Choose Create a Lambda function (or Get started now if this is your first time using the service).

Choose Skip in the blueprints screen.

Name your function (e.g., s3-twitter-to-es-python).

Choose Python 2.7 runtime and upload the zip file my-s3-twitter-to-es-python.zip.

Make sure the Handler field value is lambda_function.lambda_handler.

Choose lambda_s3_exec_role (if this role does not exist yet, create a new role based on the S3 execution role template).

Keep the memory at 128 MB and choose a 2-minute timeout.

Choose Next and Create function, then wait until the function is created.

On the Event sources tab, choose Add event source.

Choose the event source type S3, select the bucket, and choose the event type Object Created (All).

Enter a value for S3 Prefix (e.g., twitter/raw-data/) to ensure the function doesn’t trigger when data is uploaded elsewhere in the bucket.

Make sure that the event source is enabled and click Submit.

Feed the producer with Twitter streaming data

Your producer is a Node.js application that connects to the Twitter feed via the Twitter stream API and puts the streaming data into Firehose. The code is available in the aws-big-data-blog repository.

To use the producer application, you have to install Node.js (go to https://nodejs.org to install it on your local machine). Alternatively, you can launch a t2.micro EC2 instance based on the Amazon Linux AMI and run the following command:

sudo yum -y install nodejs npm --enablerepo=epel

Download the application, unzip the file, and run npm install from the twitter-streaming-firehose-nodejs folder.

Modify the Config.js file with your settings, changing <YOUR PARAMETERS> as follows:

firehose

DeliveryStreamName – Name your stream. The app creates the delivery stream if it does not exist.

BucketARN: Use the bucket matched to the Lambda function.

RoleARN: The ARN of the Firehose role you created earlier (“firehose_delivery_role”). To build it you need your account ID, which you can find in the IAM dashboard’s sign-in link for users (https://Your_AWS_Account_ID.signin.aws.amazon.com/console/).

Prefix: Use the same s3 prefix that you used in your Lambda function event source (e.g., twitter/raw-data/).

twitter – Enter your twitter application keys.

region – Enter your Firehose region (e.g., us-east-1, us-west-2, eu-west-1).

Make sure your aws credentials are configured under <HOME FOLDER>/.aws/credentials as follows:

[default]
aws_access_key_id=
aws_secret_access_key=

Now that your Config.js file is modified, you can open a console window and initiate execution of your program by running the following command:

node twitter_stream_producer_app

Wait a few seconds until the delivery stream is active, and then you should see Twitter data on your screen. The app collects tweets from the US, but you can modify the locations in the Config.js file. For more information, go to twitter geolocation.

Discover and analyze data

Wait a few minutes to give Firehose time to deliver enough files to Amazon S3 to make them interesting to review. The files should appear under the following bucket path:

s3://<bucket>/<prefix>/<year>/<month>/<day>/<hour>/
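You can confirm that objects are arriving with a quick listing from the AWS CLI, using your own bucket name and prefix:

aws s3 ls s3://A_BUCKET_YOU_SETUP_FOR_THIS_DEMO/twitter/raw-data/ --recursive

Each object holds a batch of raw tweets that the Lambda function indexes into Elasticsearch Service shortly after delivery.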

Open Kibana in your browser using your Kibana URL. To start discovering the data stored in Elasticsearch Service, you need to create an index pattern pointing to your Elasticsearch index, which is like a ‘database’ in a relational database. For more information, go to What is an Elasticsearch Index?.

Create an index pattern as follows:

Create an index pattern

On the Discover tab, choose Add near the text field in the left sidebar to add the tweet text as a column in the results.

Start exploring the data by choosing any field in the left sidebar and filtering on its values. You can search for a specific term by replacing the asterisk (*) in the search field with your terms. You can also filter by time by choosing the Time Filter icon at the top right.

For example, you can search for the term “Trump” to discover and understand the data related to one of the candidates.

search for the term Trump to discover and understand the data related to one of the candidates

In this 2016 election discovery platform, you can analyze the performance of the presidential candidates: How many tweets they got, the sentiment of those tweets (positive/negative/neutral/confused), and how the tweets are geographically distributed (identifying politically active areas).

Because this is a near real-time discovery platform, you can measure the immediate impact of political events on the candidates’ popularity (for example, during a political debate).

Create a dashboard

To visualize candidates’ popularity in Twitter (in how many tweets a candidate was mentioned), create a top mentions bar chart.

On the Discover tab, choose the mentions field on the left sidebar.

Choose Visualize (ignore the warning).

Choose Visualize

On the X-Axis tab, change the size from 20 to 10 and choose Apply.

Choose the Save Visualization icon at the top right.

Enter a name and choose Save.

To analyze how tweets related to the 2016 election are geographically distributed in order to identify politically active areas, create a tile map.

On the Discover tab, choose the coordinates.coordinates field.

Choose Visualize.

Note: By default, in the Node.js app, tweets are collected only from the U.S.

To center the map, choose the crop icon.

Choose Save Visualization.

To identify candidates’ popularity (or unpopularity), visualize the sentiments field. Because there are only 4 potential values (positive/negative/neutral/confused), you can use a pie chart visualization.

On the Visualize tab, choose the New Visualization icon.

Choose Pie chart.

Choose new search, Split Slices, Terms aggregation, and the sentiments field.

Choose Apply and Save Visualization.

Combine all the visualizations into a single dashboard.

On the Dashboard tab, choose Add Visualization at the top right corner, and select a visualization.

Repeat the previous step for all other visualizations.

Choose Save Dashboard, enter a name for your dashboard, and choose Save.

Now you can search for the presidential candidates in the data. Put the following search terms in the search filter field:

realDonaldTrump,realBenCarson,JebBush,tedcruz,ChrisChristie,JohnKasich,
GovMikeHuckabee,RandPaul,MarcoRubio,CarlyFiorina,JebBush,HillaryClinton,
MartinOMalley,BernieSanders

Search for candidates in the data

You’ve got yourself a dashboard! Select your preferred candidate in the bar chart to drill down to performance.

Conclusion

AWS managed services like Amazon Kinesis Firehose, AWS Lambda, and Amazon Elasticsearch Service take care of provisioning and maintaining the infrastructure components when you build near real-time applications, and they let you focus on your business logic.

You can quickly and easily tie these services together to create a near real-time discovery platform. For this post, we analyzed the performance of the 2016 presidential candidates, but this type of platform can be used for a variety of other use cases.

If you have questions or suggestions, please leave a comment below.

————————–

Related

Getting Started with Elasticsearch and Kibana on Amazon EMR

 
