Tag Archives: Developer tips

Don’t Reinvent Date Formats

Post Syndicated from Bozho original https://techblog.bozho.net/dont-reinvent-date-formats/

Microsoft Exchange has a bug that practically stops email. (The public sector primarily uses Exchange, so many of the institutions I’m responsible for as a minister have their email “stuck”.) The bug is described here and, fortunately, has a solution.

But let me say something simple and obvious: don’t reinvent date formats, please. When in doubt, use ISO 8601 or epoch millis (in UTC), or RFC 2822. Nothing else makes sense.

Certainly, treating an int as a date is an abysmal idea (it doesn’t even save that much in resources). 202201010000 is not a date format worth considering.
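
For illustration, here’s a minimal sketch of producing the sane options in Java (java.time covers all three; the printed values are just examples):

import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DateFormats {
    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println(now);                 // ISO 8601 in UTC, e.g. 2022-01-01T10:15:30Z
        System.out.println(now.toEpochMilli());  // epoch millis
        // the RFC 1123 formatter, which matches the date style RFC 2822 uses in email headers
        System.out.println(DateTimeFormatter.RFC_1123_DATE_TIME
                .format(OffsetDateTime.ofInstant(now, ZoneOffset.UTC)));
    }
}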

(As a side note, another piece of advice – add automated tests for future timestamps. Sometimes they catch odd behavior.)

I’ll finish with Jon Skeet’s talk on dates, strings and numbers.

The post Don’t Reinvent Date Formats appeared first on Bozho's tech blog.

Simple Things That Are Actually Hard: User Authentication

Post Syndicated from Bozho original https://techblog.bozho.net/simple-things-that-are-actually-hard-user-authentication/

You build a system. User authentication is the component that is always there, regardless of the functionality of the system. And by now it should be simple to implement it – just “drag” some ready-to-use authentication module, or configure it with some basic options (e.g. Spring Security), and you’re done.

Well, no. It’s the most obvious thing and yet it’s extremely complicated to get right. It’s not just login form -> check username/password -> set cookie. There are a lot of other things to think about:

  • Cookie security – how to make sure a cookie doesn’t leak or can’t be forged? Should you even have a cookie, or use a stateless approach like JWT? Should SameSite be lax or strict?
  • Bind cookie to IP and logout user if IP changes?
  • Password requirements – minimum length, special characters? UI to help with selecting a password?
  • Storing passwords in the database – bcrypt, scrypt, PBKDF2, SHA with multiple iterations? (a bcrypt sketch follows after this list)
  • Allow storing in the browser? Generally “yes”, but some applications deliberately hash it before sending it, so that it can’t be stored automatically
  • Email vs username – do you need a username at all? Should change of email be allowed?
  • Rate-limiting authentication attempts – how many failed logins should block the account, for how long, should admins get notifications or at least logs for locked accounts? Is the limit per IP, per account, a combination of those?
  • Captcha – do you need a captcha at all, which one, and after how many attempts? Is reCAPTCHA an option?
  • Password reset – password reset token database table or expiring links with HMAC? Rate-limit password reset?
  • SSO – should your service support LDAP/ActiveDirectory authentication (probably yes)? Should it support SAML 2.0 or OpenID Connect, and if yes, which ones – or all of them? Should it ONLY support SSO, rather than internal authentication?
  • 2FA – TOTP or other? Implement the whole 2FA flow, including enable/disable and the use of backup codes; add an option to not ask for 2FA on a particular device for a period of time.
  • Login by link – should the option to send a one-time login link by email be supported?
  • XSS protection – make sure no XSS vulnerabilities exist especially on the login page (but not only, as XSS can steal cookies)
  • Dedicated authentication log – keep a history of all logins, with time, IP, user agent
  • Force logout – is the ability to log out a logged-in device needed, and how to implement it? E.g. with stateless tokens it’s not trivial.
  • Keeping a mobile device logged in – what should be stored client-side? (certainly not the password)
  • Working behind proxy – if the client IP matters (it does), make sure the X-Forwarded-For header is parsed
  • Capture login timezone for user and store it in the session to adjust times in the UI?
  • TLS mutual authentication – if we need to support hardware token authentication with a private key, we should enable TLS mutual authentication. What should be in the truststore? Does the web server support per-page mutual TLS or should we use a subdomain?
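
On the password storage point, a minimal sketch using Spring Security’s BCryptPasswordEncoder might look like this (the work factor and sample passwords are arbitrary examples):

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class PasswordStorageExample {
    public static void main(String[] args) {
        // work factor 12 is just an example; tune it to your hardware and latency budget
        PasswordEncoder encoder = new BCryptPasswordEncoder(12);

        String hash = encoder.encode("correct horse battery staple");
        System.out.println(hash);                                                  // store this, never the raw password
        System.out.println(encoder.matches("correct horse battery staple", hash)); // true
        System.out.println(encoder.matches("wrong password", hash));               // false
    }
}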

And that’s for the most obvious feature that every application has. No wonder it has been implemented incorrectly many, many times. The IT world is complex and nothing is simple. Sending email isn’t simple, authentication isn’t simple, logging isn’t simple. Working with strings and dates isn’t simple, sanitizing input and output isn’t simple.

We have done a poor job in building the frameworks and tools to help us with all those things. We can’t really ignore them, we have to think about them actively and take conscious, informed decisions.

The post Simple Things That Are Actually Hard: User Authentication appeared first on Bozho's tech blog.

Obtaining TLS Client Certificates In Spring Integration

Post Syndicated from Bozho original https://techblog.bozho.net/obtaining-tls-client-certificates-in-spring-integration/

Spring Integration is a very powerful and extensible framework for, well, integrations. But sometimes it’s not trivial to get some information that you need. In my case – the certificate used for mutual authentication in a TLS (syslog over TLS) connection. You have a Java method that receives a Message, and ideally you’d want to get the certificate chain used by the client to authenticate itself (e.g. you may need to extract the CN).

Fortunately, Spring Integration is flexible. And it can be done, but it’s a bit convoluted. I’ll use XML notation, but the same can be achieved through Java config.

<bean id="nioConnectionSupport" class="com.yourcompany.util.net.TLSMutualNioConnectionSupport">
        <constructor-arg ref="sslContextSupport" />
        <constructor-arg value="false" />
</bean>
<bean id="interceptorFactoryChain" class="org.springframework.integration.ip.tcp.connection.TcpConnectionInterceptorFactoryChain">
        <property name="interceptors">
            <bean class="com.yourcompany.util.net.TLSSyslogInterceptorFactory" />
        </property>
</bean>

<int-ip:tcp-connection-factory id="tlsConnectionFactory" type="server" port="${tcp.tls.port}"
                                   using-nio="true" nio-connection-support="nioConnectionSupport"
                                   single-use="false" interceptor-factory-chain="interceptorFactoryChain" />

The sslContextSupport would typically be a org.springframework.integration.ip.tcp.connection.DefaultTcpSSLContextSupport or a custom implementation (e.g. if you want to use a “blind” trust store).

Then you’d need the two classes. You can check them at their respective gists: TLSSyslogInterceptorFactory and TLSMutualNioConnectionSupport.

What do these classes do? The TLSSyslogInterceptorFactory sets a new header on the message that contains the client certificates. The TLSMutualNioConnectionSupport class sets the “wantClientAuth” option on the SSL engine. There is another option – “needClientAuth” – which would require client authentication, rather than just support it. Depending on the use case you can use one or the other.

Then you can obtain the certificates at your handler method via:

Certificate[] certificates = (Certificate[]) message.getHeaders().get(TLSSyslogInterceptorFactory.TLS_CLIENT_CERTIFICATES);
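
If you then need the CN (as mentioned above), a minimal sketch of extracting it from the leaf certificate might look like this (assuming the chain elements are X509Certificate instances; the class name is just an example):

import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CertificateUtils {
    // extracts the CN from the subject of the client's leaf certificate (first element of the chain)
    public static String extractCn(Certificate[] certificates) throws Exception {
        X509Certificate leaf = (X509Certificate) certificates[0];
        LdapName subject = new LdapName(leaf.getSubjectX500Principal().getName());
        for (Rdn rdn : subject.getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return null;
    }
}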

A small tip I wanted to share to help the next one trying to achieve that.

The post Obtaining TLS Client Certificates In Spring Integration appeared first on Bozho's tech blog.

Every Serialization Framework Should Have Its Own Transient Annotation

Post Syndicated from Bozho original https://techblog.bozho.net/every-serialization-framework-should-have-its-own-transient-annotation/

We’ve all used dozens of serialization frameworks – for JSON, XML, binary, and ORMs (which are effectively serialization frameworks for relational databases). And there’s always the moment when you need to exclude some field from an object – make it “transient”.

So far so good, but then comes the point where one object is used by several serialization frameworks within the same project/runtime. That’s not necessarily the case, but let me discuss the two alternatives first:

  • Use the same object for all serializations (JSON/XML for APIs, binary serialization for internal archiving, ORM/database) – preferred if there are only minor differences between the serialized/persisted fields. Using the same object saves a lot of tedious transferring between DTOs.
  • Use different DTOs for different serializations – that becomes a necessity when scenarios become more complex and using the same object becomes a patchwork of customizations and exceptions

Note that both strategies can exist within the same project – there are simple objects and complex objects, and you can only have a variety of DTOs for the latter. But let’s discuss the first option.

If each serialization framework has its own “transient” annotation, it’s easy to tweak the serialization of one or two fields. More importantly, it will have predictable behavior. If not, then you may be forced to have separate DTOs even for classes where one field differs in behavior across the serialization targets.

For example the other day I had the following surprise – we use Java binary serialization (ObjectOutputStream) for some internal buffering of large collections, and the objects are then indexed. In a completely separate part of the application, objects of the same class get indexed with additional properties that are irrelevant for the binary serialization and therefore marked with the Java transient modifier. It turns out, GSON respects the “transient” modifier and these fields are never indexed.
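
Here’s a minimal sketch of that surprise (the class and field names are made up; Gson skips Java’s transient fields by default):

import com.google.gson.Gson;

public class TransientSurprise {
    static class LogRecord implements java.io.Serializable {
        String message = "user logged in";
        // meant only to be excluded from Java binary serialization...
        transient String indexOnlyProperty = "some derived value";
    }

    public static void main(String[] args) {
        // ...but Gson also drops it: prints {"message":"user logged in"}
        System.out.println(new Gson().toJson(new LogRecord()));
    }
}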

In conclusion, this post has two points. The first is – expect any behavior from serialization frameworks and have tests to verify different serialization scenarios. And the second is for framework designers – don’t reuse transient modifiers/annotations from the language itself or from other frameworks, it’s counterintuitive.

The post Every Serialization Framework Should Have Its Own Transient Annotation appeared first on Bozho's tech blog.

List of Open Source Security Tools

Post Syndicated from Bozho original https://techblog.bozho.net/list-of-open-source-security-tools/

As a founder of a security company, I’m constantly looking for open source tools to either incorporate in our offering, get inspiration from, or provide integration with. And there are dozens of great open source security tools, so I decided to publish a list of them. This plethora of options is one of the reasons security is so hard – there are many different ways to achieve something, and it almost always involves headaches with configuring and connecting various “point solutions” (as marketers call them). So here’s the list, in no apparent order (note that I’ve listed only defensive tools; offensive ones like metasploit, nmap, wireshark, etc. probably deserve a separate post):

Security monitoring, intrusion detection/prevention

  • Suricata – intrusion detection system
  • Snort – intrusion detection system
  • Zeek – network security monitoring
  • OSSEC – host-based intrusion detection system
  • Wazuh – a more active fork of OSSEC
  • Velociraptor – endpoint visibility and response
  • OSSIM – open source SIEM, at the core of AlienVault
  • SecurityOnion – security monitoring and log management
  • Elastic SIEM – SIEM functionality by Elasticsearch
  • Mozdef – SIEM-like layer on top of Elasticsearch
  • Sagan – log analytics and correlation
  • Apache Metron – (retired) network security monitoring, evolved from Cisco OpenSOC
  • Arkime – packet capture and search tool (formerly Moloch)
  • PRADS – real-time asset detection
  • BloodHound – ActiveDirectory relationship detection

Threat intelligence

  • MISP – threat intelligence platform
  • SpiderFoot – threat intelligence aggregation
  • OpenCTI – threat intelligence platform
  • OpenDXL – open source tools for security intelligence sharing

Incident response

Vulnerability assessment

  • OpenVAS – very popular vulnerability assessment
  • ZAProxy – web vulnerability scanner by OWASP
  • WebScarab – (obsolete) web vulnerability scanner by OWASP
  • w3af – web vulnerability scanner
  • Loki – IoC scanner
  • CVE Search – set of tools for search in CVE data

Firewall

  • pfSense – the most popular open source firewall
  • OPNsense – HardenedBSD-based firewall
  • Smoothwall – Linux-based firewall

Antivirus / endpoint protection

Email security

I’m sure there are more (and I’d be happy to add them, e.g. this list suggested on Reddit, or others in the Reddit thread). Assessing each individual tool, its ease of use, its compliance aspects and the combination of multiple tools is a hard task (here’s a SANS paper on “stitching” multiple tools together). And making sense of the whole landscape (as I’ve tried previously) hints at the complexity of a security professional’s job.

The post List of Open Source Security Tools appeared first on Bozho's tech blog.

Always Name Your Thread Pools

Post Syndicated from Bozho original https://techblog.bozho.net/always-name-your-thread-pools/

Our software tends to use a lot of thread pools – mostly through java.util.concurrent.ExecutorService implementations (created via Executors.new...). We create these for various async use-cases, and they can be seen all over the place. Each executor has a thread factory – you can supply one when creating the executor, and if you don’t, a default thread factory is used whenever a new thread is needed.

When using spring, those can be created using <task:executor />. In that case each executor’s thread factory is provided by spring, and it uses the name of the executor bean (specified with id="executorName"). But for executors not created by spring, a default name is used, which isn’t helpful and doesn’t let you differentiate threads by name.

And you need to differentiate threads by name – in case of performance issues you have various options to investigate: thread dumps and the top command. In both cases it’s useful to know what function a thread serves, as the stacktrace in the dump might not always be revealing.

And my favorite tool for quick investigation is top. More precisely, top -H -p <pid>. This shows the usual top table, but the -H flag means that threads for the chosen process should be printed. You basically get the most CPU-heavy and currently active threads, by name. In those cases it’s extremely useful to have custom names.

But how do you set a name? By specifying a named thread factory when creating each executor. Here’s a stackoverflow answer with multiple ways to achieve thread naming.

The method that I’m using is based on the 2nd answer:

import java.util.concurrent.ThreadFactory;

import com.google.common.util.concurrent.ThreadFactoryBuilder;

public class AsyncUtils {
    public static ThreadFactory createNamedThreadFactory(String name) {
        // Guava's ThreadFactoryBuilder; threads will be named "<name>-0", "<name>-1", ...
        return new ThreadFactoryBuilder().setNameFormat(name + "-%d").build();
    }
}
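
For illustration, here’s how that factory might be used when creating an executor (the pool name and size are arbitrary examples):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorSetup {
    public static void main(String[] args) {
        // threads show up as "indexing-0", "indexing-1", ... in thread dumps and in top -H
        ExecutorService indexingExecutor =
                Executors.newFixedThreadPool(4, AsyncUtils.createNamedThreadFactory("indexing"));
        indexingExecutor.submit(() -> System.out.println(Thread.currentThread().getName()));
        indexingExecutor.shutdown();
    }
}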

Centrally managing all executors through spring might be a better idea, but not everyone is using spring and sometimes an executor is needed for a small piece of functionality that could even go outside spring beans. So it’s a good idea to have that method up your sleeve.

The post Always Name Your Thread Pools appeared first on Bozho's tech blog.

Connecting to Kibana Within an AWS VPC

Post Syndicated from Bozho original https://techblog.bozho.net/connecting-to-kibana-within-an-aws-vpc/

When you use the managed Elasticsearch service on AWS, you usually choose an encrypted connection (via KMS-managed keys), which means you can’t use just any tool to connect to your Elasticsearch cluster. In fact, in order to manually execute commands the easiest option is to use the built-in Kibana and its dev tools.

However, connecting to Kibana is also not trivial due to typical security precautions. Elasticsearch can be run outside or inside a VPC. If you run it outside a VPC, you have to modify its access policy to allow connections from a set of IPs (e.g. your office network).

But if you run it inside a VPC (which is recommended), you have to connect to the VPC. And you have a lot of options for that, but all of them are rather complicated and sometimes even introduce additional cost.

A much simpler approach is to connect via SSH to a machine in the VPC (typically your bastion/jump host) and use it as a SOCKS proxy for your browser. The steps are:

  1. Open an SSH tunnel. If you are using Windows, you can do it with PuTTY. On Linux, you can use ssh directly: ssh -D 1337 -q -C -N user@your-bastion-host (where the target is your bastion/jump host in the VPC)
  2. Set the SOCKS proxy in the browser. In Firefox, open Options and type “SOCKS”; there’s only one option to choose (in the Network settings), then set localhost and port 1337 (or whatever port you’ve chosen). Here are the instructions for Chrome (or you can use a plugin)
  3. Open the Kibana URL in the browser. Note that now all your browser traffic will go through your VPC, so depending on the VPC configuration other websites might not work.

That’s it, a quick tip that might potentially save a lot of time trying to get a VPC connection to work.

The post Connecting to Kibana Within an AWS VPC appeared first on Bozho's tech blog.

Elasticsearch – Scalability and Multitenancy [slides]

Post Syndicated from Bozho original https://techblog.bozho.net/elasticsearch-scalability-and-multitenancy-slides/

Last week I gave a talk in a local tech group about my experience with Elasticsearch at LogSentinel, and how we achieve multitenancy and scalability.

Obviously, the topic of scalability is huge and it can’t be fully covered in 45 minutes, but I tried presenting the main aspects from the application perspective (I entirely skipped the Ops perspective, as it was a developer audience). The list of resources at the end of the slides shows some of the sources of my “research” on the topic, which I recommend going through.

Below are the slides (the talk was not in English):

I hope it’s a useful intro to the topic. The main conclusion is – Elasticsearch is counterintuitive if you are used to relational databases, and some internals (shards, Lucene segments) “leak” through the abstractions and influence the application design (as per the law of leaky abstractions).

The post Elasticsearch – Scalability and Multitenancy [slides] appeared first on Bozho's tech blog.

Content-Security-Policy Nonce with Spring Security

Post Syndicated from Bozho original https://techblog.bozho.net/content-security-policy-nonce-with-spring-security/

Content-Security-Policy is important for web security. Yet it’s not mainstream yet, its syntax is hard, it’s rather prohibitive, and tools rarely have flexible support for it.

While Spring Security does have a built-in Content Security Policy (CSP) configuration, it only lets you specify the policy as a string, not build it dynamically. And in some cases you need more than that.

In particular, CSP discourages the use of inline javascript, because it introduces vulnerabilities. If you really need it, you can use unsafe-inline, but that’s a bad approach, as it negates the whole point of CSP. The alternative presented on that page is to use a hash or a nonce.

I’ll explain how to use a nonce with spring security, if you are using .and().headers().contentSecurityPolicy(policy). The policy string is static, so you can’t generate a random nonce for each request – and having a static nonce is useless. So first, you define a CSP nonce filter:

public class CSPNonceFilter extends GenericFilterBean {
    private static final int NONCE_SIZE = 32; //recommended is at least 128 bits/16 bytes
    private static final String CSP_NONCE_ATTRIBUTE = "cspNonce";

    private SecureRandom secureRandom = new SecureRandom();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        byte[] nonceArray = new byte[NONCE_SIZE];

        secureRandom.nextBytes(nonceArray);

        String nonce = Base64.getEncoder().encodeToString(nonceArray);
        request.setAttribute(CSP_NONCE_ATTRIBUTE, nonce);

        chain.doFilter(request, new CSPNonceResponseWrapper(response, nonce));
    }

    /**
     * Wrapper to fill the nonce value
     */
    public static class CSPNonceResponseWrapper extends HttpServletResponseWrapper {
        private String nonce;

        public CSPNonceResponseWrapper(HttpServletResponse response, String nonce) {
            super(response);
            this.nonce = nonce;
        }

        @Override
        public void setHeader(String name, String value) {
            if (name.equals("Content-Security-Policy") && StringUtils.isNotBlank(value)) {
                super.setHeader(name, value.replace("{nonce}", nonce));
            } else {
                super.setHeader(name, value);
            }
        }

        @Override
        public void addHeader(String name, String value) {
            if (name.equals("Content-Security-Policy") && StringUtils.isNotBlank(value)) {
                super.addHeader(name, value.replace("{nonce}", nonce));
            } else {
                super.addHeader(name, value);
            }
        }
    }
}

And then you configure it with spring security using: .addFilterBefore(new CSPNonceFilter(), HeaderWriterFilter.class).

The policy string should contain `nonce-{nonce}`, which will get replaced with a random nonce on each request.

The filter is set before the HeaderWriterFilter so that it can wrap the response and intercept all calls that set headers. Why can’t it be done by simply overriding the headers after they are set by the HeaderWriterFilter, using response.setHeader(..)? Because by then the response is already committed and overriding does nothing.
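
Putting the two together, a minimal configuration sketch might look like this (the policy value and the class name are examples, not a recommendation for a complete policy):

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.web.header.HeaderWriterFilter;

public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.addFilterBefore(new CSPNonceFilter(), HeaderWriterFilter.class)
            .headers()
                // {nonce} is the placeholder that CSPNonceResponseWrapper replaces per request
                .contentSecurityPolicy("script-src 'self' 'nonce-{nonce}'; object-src 'none'");
    }
}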

Then in your pages where you for some reason need inline scripts, you can use:

<script nonce="{{ cspNonce }}">...</script>

(I’m using the Pebble template syntax; but you can use any template engine to output the request attribute “cspNonce”)

Once again, inline javascript is rarely a good idea, but sometimes it’s necessary, at least temporarily (if you are adding a CSP to a legacy application, for example, and can’t rewrite everything).

We should have CSP everywhere, but building the policy should be aided by the frameworks we use, otherwise it’s rather tedious to write a proper policy that doesn’t break your application and is secure at the same time.

The post Content-Security-Policy Nonce with Spring Security appeared first on Bozho's tech blog.

Releasing Often Helps With Analyzing Performance Issues

Post Syndicated from Bozho original https://techblog.bozho.net/releasing-often-helps-with-analyzing-performance-issues/

Releasing often is a good thing. It’s cool, and helps us deliver new functionality quickly, but I want to share one positive side-effect – it helps with analyzing production performance issues.

We do releases every 5 to 10 days and after a recent release, the application CPU chart jumped twice (the lines are differently colored because we use blue-green deployment):

What are the typical ways to find performance issues with production loads?

  • Connect a profiler directly to production – tricky, as it requires managing network permissions and might introduce unwanted overhead
  • Run performance tests against a staging or local environment and do profiling there – good, except your performance tests might not hit exactly the functionality that causes the problem (this is what happened in our case, as it was some particular types of API calls that caused it, which weren’t present in our performance tests). Also, performance tests can be tricky
  • Do a thread dump (and heap dump) and analyze them locally – a good step, but requires some luck and a lot of experience analyzing dumps, even if equipped with the right tools
  • Check your git history / release notes for what change might have caused it – this is what helped us resolve the issue. And it was possible because there were only 10 days of commits between the releases.

We could go through all of the commits and spot potential performance issues. Most of them turned out not to be a problem, and one seemingly unproblematic piece was discovered to be the problem after commenting it out for a brief period and deploying a quick release without it, to test the hypothesis. I’ll share a separate post about the particular issue, but we would have had to waste a lot more time if that release had 3 months’ worth of commits rather than 10 days’.

Sometimes it’s not an obvious spike in the CPU or memory, but a more gradual issue that you introduce at some point and that starts being a problem a few months later. That’s what happened a few months ago, when we noticed a steady growth in CPU usage along with the growth of ingested data. Logical in theory, but the CPU usage grew faster than the data ingestion rate, which isn’t good.

So we were able to answer the question “when did it start growing?” in order to pinpoint the release that introduced the issue. Because the said release had only 5 days of commits, it was much easier to find the culprit.

All of the above techniques are useful and should be employed at the right time. But releasing often gives you a hand with analyzing where a performance issue is coming from.

The post Releasing Often Helps With Analyzing Performance Issues appeared first on Bozho's tech blog.

Syntactic Sugar Is Not Always Good

Post Syndicated from Bozho original https://techblog.bozho.net/syntactic-sugar-is-not-always-good/

This write-up is partly inspired by a recent post by Vlad Mihalcea on LinkedIn about the recently introduced text blocks in Java. More about them can be read here.

Now, that’s a nice feature. I’ve used it in Scala several years ago, and other languages also have it, so it seems like a no-brainer to introduce it in Java.

But, syntactic sugar (please don’t argue whether that’s precisely syntactic sugar or not) can be problematic and lead to “syntactic diabetes”. It has two possible issues.

The less important one is consistency – if you can do one thing in multiple, equally valid ways, that introduces inconsistency in the code and pointless arguments of “the right way to do things”. In this context – for 2-line strings do you use a text block or not? Should you do multi-line formatting for simple strings or not? Should you configure checkstyle rules to reject one or the other option and in what circumstances?

The second, and bigger, problem is code readability. I know it sounds counter-intuitive, but bear with me. The example that Vlad gave illustrates it – do you want to have a 20-line SQL query in your Java code? I’d say no – you’d better extract it to a separate file and, using some form of templating, populate the right values. This is certainly readable when you browse the code:

String query = QueryUtils.loadQuery("age-query.sql", timestamp, diff, other);
// or
String query = QueryUtils.loadQuery("age-query.sql",
        Arrays.asList("param1Name", "param2Name"),
        Arrays.asList(param1Value, param2Value));

Queries can be placed in /src/main/resources and loaded as templates (and cached) by the QueryUtils. And because of the previous lack of text blocks, you wouldn’t be tempted to use ugly string-concatenated queries inside your code.
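
QueryUtils is not an existing library class – as a rough illustration, a minimal sketch of such a helper (loading from the classpath, caching, and substituting named placeholders) might look like this; in real code the values should still be bound as prepared-statement parameters rather than inlined:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueryUtils {
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public static String loadQuery(String fileName, List<String> paramNames, List<Object> paramValues) {
        String query = CACHE.computeIfAbsent(fileName, QueryUtils::readResource);
        // naive placeholder substitution, e.g. ":param1Name" in the .sql file
        for (int i = 0; i < paramNames.size(); i++) {
            query = query.replace(":" + paramNames.get(i), String.valueOf(paramValues.get(i)));
        }
        return query;
    }

    private static String readResource(String fileName) {
        try (InputStream in = QueryUtils.class.getResourceAsStream("/" + fileName)) {
            if (in == null) {
                throw new IllegalStateException("Query file not found: " + fileName);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new IllegalStateException("Could not load query " + fileName, e);
        }
    }
}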

But now, with this feature, you are tempted to do that, because, well, it looks good. Same goes for Elasticsearch queries, for JSON templates and whatnot. With this “sugar” you have more incentive to just put them in the code, where they, arguably, make the code less readable. If you really have to debug the query, as opposed to assuming its semantics by its name and relying on a proper implementation, you can easily go to age-query.sql and work with it. Just like when you extract a private method with some implementation details so that it makes the calling method more readable and easy to follow.

Both problems have manifested themselves in my Scala experience, which I’ve summed up in my talk “Scala – the good, the bad and the very ugly” (only slides available). Scala allows you to express things nicely and in multiple ways, which leads to inconsistencies and horrible code readability in some cases.

Counterintuitively, sometimes the syntactic improvements may be worse for code readability, because they introduce complexity and because they make it easier to do the wrong thing.

That’s not a universal complaint, and certainly syntactic sugar is needed – you don’t have to write List<String> list = new ArrayList<String>() if you can use the diamond operator. But each such feature should be thought through not just in terms of how easy it makes writing the code, but also how easy it is to read it, and more importantly – what type of code it incentivizes.

The post Syntactic Sugar Is Not Always Good appeared first on Bozho's tech blog.

Creating a CentOS Startup Screen

Post Syndicated from Bozho original https://techblog.bozho.net/creating-a-centos-startup-screen/

When distributing bundled software, you have multiple options, but if we exclude fancy newcomers like Docker and Kubernetes, you’re left with the following: an installer (for Windows), a package (rpm or deb) for the corresponding distro, a tarball with a setup shell script that creates the necessary configuration, or a virtual machine (a virtual appliance).

All of these options are applicable in different scenarios, but distributing a ready-to-deploy virtual machine image is considered standard for enterprise software. Your machine has all the dependencies it needs (because it might not be permitted to connect to the internet), and it just has to be fired up.

But typically you’d want some initial configuration or at least have the ability to show the users how to connect to your (typically web-based) application. And so creating a startup screen is what many companies choose to do. Below is a simple way to do that on CentOS, which is my distro of preference. (There are other resources on the topic, e.g. this one, but it relies on /etc/inittab which is deprecated in CentOS 8).

useradd startup -m
yum -y install dialog

sed -i -- "s/-o '-p -- \\u' --noclear/--autologin startup --noclear/g" /usr/lib/systemd/system/[email protected]

chmod +x /install/startup.sh
echo "exec /install/startup.sh" >> /home/startup/.bashrc

systemctl daemon-reload

With the code above you are creating a startup user and auto-logging that user in before the regular login prompt. Replacing the ExecStart line in getty@.service is doing exactly that.

Then the script adds the invocation of a startup bash script to the .bashrc which gets run when the user is logged in. What this script does is entirely up to you, below is a simple demo using the dialog command (that we just installed above):

#!/bin/sh
# Based on https://askubuntu.com/questions/1705/how-can-i-create-a-select-menu-in-a-shell-script

HEIGHT=15
WIDTH=70
CHOICE_HEIGHT=4
BACKTITLE="Your Company"
TITLE="Your Product Setup"
BIND_IP=`ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'`
INFO="Welcome to MyProduct.\n\n\nWeb access URL: https://$BIND_IP\n\n\n\nFor more information visit https://docs.example.com"

CHOICE=$(dialog --clear \
                --backtitle "$BACKTITLE" \
                --title "$TITLE" \
                --msgbox "$INFO" \
                $HEIGHT $WIDTH \
                2>&1 >/dev/tty)

clear
echo 'Enter password for user "root":'
su root

This dialog shows just some basic information, but you can extend it to allow users to make choices and input some parameters. More importantly, it gets the current IP address and shows it to the user. That’s something they could find out themselves in other ways, but it’s friendlier to show it like that. And you can’t hard-code it, because each installation will have a different IP (even if not using DHCP, you should let the user set the static IP they’ve assigned rather than forcing one on them). At the end of the script it switches to the root user.

Security has to be considered here – the startup user should not be allowed to do anything meaningful in the system, because it is automatically logged in without a password. According to this answer, exec sort-of solves that problem (e.g. when you mistype the root password, you are back at the startup.sh script rather than at the console).

I agree that’s a rare use-case but I thought I’d share this “arcane” knowledge.

The post Creating a CentOS Startup Screen appeared first on Bozho's tech blog.

My Advice To Developers About Working With Databases: Make It Secure

Post Syndicated from Bozho original https://techblog.bozho.net/my-advice-to-developers-about-working-with-databases-make-it-secure/

Last month Ben Brumm asked me for the one advice I’d like to give to developers that are working with databases (in reality – almost all of us). He published mine as well as many others’ answers here, but I’d like to share it with my readers as well.

If I had to give developers working with databases one piece of advice, it would be: make it secure. Every other thing you’ll figure out in time – how to structure your tables, how to use an ORM, how to optimize queries, how to use indexes, how to do multitenancy. But security may not be on the list of requirements, and it may be too late when the need becomes obvious.

So I’d focus on several things:

  • Prevent SQL injections – make sure you use an ORM or prepared statements rather than building queries with string concatenation (see the sketch after this list). Otherwise a malicious actor can inject anything in your queries and turn them into a DROP DATABASE query, or worse – one that exfiltrates all the data.
  • Support encryption in transit – this often has to be supported by the application’s driver configuration, e.g. by trusting a particular server certificate. Unencrypted communication, even within the same datacenter, is a significant risk and that’s why databases support encryption in transit. (You should also think about encryption at rest, but that’s more of an Ops task)
  • Have an audit log at the application level – “who did what” is a very important question from a security and compliance point of view. And no native database functionality can consistently answer the question “who” – it’s the application that manages users. So build an audit trail layer that records who did what changes to what entities/tables.
  • Consider record-level encryption for sensitive data – a database can be dumped in full by those who have access (or gain access maliciously). This is how data breaches happen. Sensitive data (like health data, payment data, or even API keys, secrets or tokens) benefits from being encrypted with an application-managed key, so that access to the database alone doesn’t reveal that data. Another option, often used for credit cards, is tokenization, which shifts the encryption responsibility to the tokenization providers. Managing the keys is hard, but even a basic approach is better than nothing.
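
On the SQL injection point, a minimal JDBC sketch of the difference (the table and column names are made-up examples):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // DON'T: "SELECT * FROM users WHERE email = '" + email + "'" – the value can inject arbitrary SQL

    // DO: the driver sends the value separately from the statement, so it can't alter the query
    public boolean userExists(Connection connection, String email) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement("SELECT 1 FROM users WHERE email = ?")) {
            ps.setString(1, email);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}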

Security is often viewed as an “operations” responsibility, and this has led to a lot of tools that try to solve the above problems without touching the application – web application firewalls, heuristics for database access monitoring, trying to extract the current user, etc. But the application is the right place for many of these protections (although certainly not the only place), and as developers we need to be aware of the risks and best practices.

The post My Advice To Developers About Working With Databases: Make It Secure appeared first on Bozho's tech blog.

Discovering an OSSEC/Wazuh Encryption Issue

Post Syndicated from Bozho original https://techblog.bozho.net/discovering-an-ossec-wazuh-encryption-issue/

I’m trying to get the Wazuh agent (a fork of OSSEC, one of the most popular open source security tools, used for intrusion detection) to talk to our custom backend (namely, our LogSentinel SIEM Collector) to allow us to reuse the powerful Wazuh/OSSEC functionalities for customers that want to install an agent on each endpoint rather than just one collector that “agentlessly” reaches out to multiple sources.

But even though there’s good documentation on the message format and encryption, I couldn’t successfully decrypt the messages. (I’ll refer to both Wazuh and OSSEC, as the functionality is almost identical in both, with the distinction that Wazuh added AES support in addition to Blowfish.)

That led me to a two-day investigation of possible reasons. The first side-discovery was the undocumented OpenSSL auto-padding of keys and IVs described in my previous article. Then it led me to actually writing C code (and copying the relevant Wazuh/OSSEC pieces) in order to debug the issue. With Wazuh/OSSEC I was generating one ciphertext, and with Java and the openssl CLI – a different one.

I made sure the key, key size, IV and mode (CBC) were identical, that they were equally padded and that OpenSSL’s EVP API was used correctly. All of that was confirmed, and yet there was a mismatch, and therefore I could not decrypt the Wazuh/OSSEC message on the other end.

After discovering the 0-padding, I also discovered a mistake in the documentation, which used a static IV of FEDCA9876543210 rather than the one found in the code, where the 0 preceded 9 – FEDCA0987654321. But that didn’t fix the issue either, only got me one step closer.

A side-note here on IVs – Wazuh/OSSEC is using a static IV, which is a bad practice. The issue was reported 5 years ago, but is minor, because they are using some additional randomness per message that remediates the use of a static IV; it’s just not idiomatic to do it that way and may have unexpected side-effects.

So, after debugging the C code, I got to a simple program that could be used to reproduce the issue and asked a question on Stack Overflow. 5 minutes after posting the question I found another, related question that had the answer – using hex strings like that in C doesn’t work. Instead, they should be encoded: char *iv = (char *)"\xFE\xDC\xBA\x09\x87\x65\x43\x21\x00\x00\x00\x00\x00\x00\x00\x00";. So, the value is not the bytes corresponding to the hex string, but the ASCII codes of each character in the hex string. I validated that on the receiving Java end with a check along the lines of the sketch below.
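
A minimal sketch of such a check, treating the ASCII codes of the hex characters (rather than the decoded hex bytes) as the raw key/IV material, zero-padded to the expected sizes; the key value is just an example MD5 hex string, not the actual OSSEC derivation:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AsciiKeyCheck {
    public static void main(String[] args) throws Exception {
        // ASCII codes of the IV string's characters, zero-padded to the 16-byte AES block size
        byte[] iv = Arrays.copyOf("FEDCA0987654321".getBytes(StandardCharsets.US_ASCII), 16);
        // 32 hex characters used directly as 32 ASCII bytes, i.e. an AES-256 key with only 16 possible values per byte
        byte[] key = "5f4dcc3b5aa765d61d8327deb882cf99".getBytes(StandardCharsets.US_ASCII);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        System.out.println(cipher.doFinal("test message".getBytes(StandardCharsets.UTF_8)).length);
    }
}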

This has implications for the documentation, as well as for the whole scheme. Because the Wazuh/OSSEC AES key is MD5(password) + MD5(MD5(agentName) + MD5(agentID)){0, 15}, the 2nd part is practically discarded, as MD5(password) is 32 characters (= 32 ASCII codes/bytes), which is already the length of the AES key. This also means the key is derived from a significantly smaller pool of options – each key byte can be one of only 16 values (the ASCII codes of the hex characters) rather than one of 256.

I raised an issue with Wazuh. Although this can be seen as a vulnerability (due to the reduced key space), it’s rather minor from a security point of view, and as the communication is mostly happening within the corporate network, I don’t think it has to be privately reported and fixed immediately.

Yet, I made a recommendation to introduce an additional configuration option that allows transitioning to an updated protocol without causing backward compatibility issues. In fact, I’d go further and recommend using TLS/DTLS rather than a home-grown, AES-based scheme. Mutual authentication can be achieved through TLS mutual authentication rather than through a shared secret.

It’s satisfying to discover issues in popular software, especially when they are not written in your “native” programming language. And as a rule of thumb – encodings often cause problems, so we should be extra careful with them.

The post Discovering an OSSEC/Wazuh Encryption Issue appeared first on Bozho's tech blog.

OpenSSL Key and IV Padding

Post Syndicated from Bozho original https://techblog.bozho.net/openssl-key-and-iv-padding/

OpenSSL is an omnipresent tool when it comes to encryption. While in Java we are used to the native Java implementations of cryptographic primitives, most other languages rely on OpenSSL.

Yesterday I was investigating the encryption used by one open source tool written in C, and two things looked strange: they were using a 192 bit key for AES 256, and they were using a 64-bit IV (initialization vector) instead of the required 128 bits (in fact, it was even a 56-bit IV).

But somehow, magically, OpenSSL didn’t complain the way my Java implementation did, and encryption worked. So, I figured, OpenSSL is doing some padding of the key and IV. But what? Is it prepending zeroes, appending zeroes, doing PKCS padding or ISO/IEC 7816-4 padding, or any of the other alternatives? I had to know if I wanted to make my Java counterpart supply the correct key and IV.

It was straightforward to test with the following commands:

# First generate the ciphertext by encrypting input.dat which contains "testtesttesttesttesttest"
$ openssl enc -aes-256-cbc -nosalt -e -a -A -in input.dat -K '7c07f68ea8494b2f8b9fea297119350d78708afa69c1c76' -iv 'FEDCBA987654321' -out input-test.enc

# Then test decryption with the same key and IV
$ openssl enc -aes-256-cbc -nosalt -d -a -A -in input-test.enc -K '7c07f68ea8494b2f8b9fea297119350d78708afa69c1c76' -iv 'FEDCBA987654321'
testtesttesttesttesttest

# Then test decryption with different paddings
$ openssl enc -aes-256-cbc -nosalt -d -a -A -in input-test.enc -K '7c07f68ea8494b2f8b9fea297119350d78708afa69c1c76' -iv 'FEDCBA9876543210'
testtesttesttesttesttest

$ openssl enc -aes-256-cbc -nosalt -d -a -A -in input-test.enc -K '7c07f68ea8494b2f8b9fea297119350d78708afa69c1c760' -iv 'FEDCBA987654321'
testtesttesttesttesttest

$ openssl enc -aes-256-cbc -nosalt -d -a -A -in input-test.enc -K '7c07f68ea8494b2f8b9fea297119350d78708afa69c1c76000' -iv 'FEDCBA987654321'
testtesttesttesttesttest

$ openssl enc -aes-256-cbc -nosalt -d -a -A -in input-test.enc -K '07c07f68ea8494b2f8b9fea297119350d78708afa69c1c76' -iv 'FEDCBA987654321'
bad decrypt

So, OpenSSL is padding keys and IVs with zeroes until they meet the expected size. Note that if -aes-192-cbc is used instead of -aes-256-cbc, decryption will fail, because OpenSSL will pad it with fewer zeroes and so the key will be different.
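
If you do need a Java counterpart to interoperate with such undersized keys, a minimal sketch of mimicking that zero-padding might look like this (a sketch only – as concluded below, it’s better to just supply correctly sized keys and IVs):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class OpenSslStylePadding {
    public static void main(String[] args) throws Exception {
        // e.g. a 192-bit key and a 64-bit IV supplied where AES-256-CBC expects 256 and 128 bits
        byte[] shortKey = new byte[24];
        byte[] shortIv = new byte[8];
        new SecureRandom().nextBytes(shortKey);
        new SecureRandom().nextBytes(shortIv);

        // OpenSSL silently appends zero bytes until the expected sizes are reached
        byte[] key = Arrays.copyOf(shortKey, 32); // AES-256 key size
        byte[] iv = Arrays.copyOf(shortIv, 16);   // AES block size

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        System.out.println(cipher.doFinal("testtesttesttesttesttest".getBytes(StandardCharsets.UTF_8)).length);
    }
}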

Not an unexpected behavior, but I’d prefer it to report incorrect key sizes rather than “do magic”, especially when it’s not easy to find exactly what magic it’s doing. I couldn’t find it documented, and the comments to this SO question hint in the same direction. In fact, for plaintext padding, OpenSSL uses PKCS padding (which is documented), so it’s extra confusing that it’s using zero-padding here.

In any case, follow the advice from the stackoverflow answer and don’t rely on this padding – always provide the key and IV in the right size.

The post OpenSSL Key and IV Padding appeared first on Bozho's tech blog.

ElasticSearch Multitenancy With Routing

Post Syndicated from Bozho original https://techblog.bozho.net/elasticsearch-multitenancy-with-routing/

Elasticsearch is great, but optimizing it for high load is always tricky. This won’t be yet another “Tips and tricks for optimizing Elasticsearch” article – there are many great ones out there. I’m going to focus on one narrow use-case – multitenant systems, i.e. those that support multiple customers/users (tenants).

You can build a multitenant search engine in three different ways:

  • Cluster per tenant – this is the hardest to manage and requires a lot of devops automation. Depending on the types of customers it may be worth it to completely isolate them, but that’s rarely the case
  • Index per tenant – this can be fine initially, and requires little additional coding (you just parameterize the “index” parameter in the URL of the queries), but it’s likely to cause problems as the customer base grows. Also, supporting consistent mappings and settings across indexes may be trickier than it sounds (e.g. some may reject an update and others may not depending on what’s indexed). Moving data to colder indexes also becomes more complex.
  • Tenant-based routing – this means you put everything in one cluster but you configure your search routing to be tenant-specific, which allows you to logically isolate data within a single index.

The last one seems to be the preferred option in general. What is routing? The Elasticsearch blog has a good overview and documentation. The idea lies in the way Elasticsearch handles indexing and searching – it splits data into shards (each shard is a separate Lucene index and can be replicated on more than one node). A shard is a logical grouping within a single Elasticsearch node. When no custom routing is used and an index request comes in, the ID is used to determine which shard is going to store the data. However, during search, Elasticsearch doesn’t know which shards have the data, so it has to ask multiple shards and gather the results. Related to that, there’s the newly introduced adaptive replica selection, where the proper shard replica is selected intelligently, rather than using round-robin.

Custom routing allows you to specify a routing value when indexing a document and then a search can be directed only to the shard that has the same routing value. For example, at LogSentinel when we index a log entry, we use the data source id (applicationId) for routing. So each application (data source) that generates logs has a separate identifier which allows us to query only that data source’s shard. That way, even though we may have a thousand clients with a hundred data sources each, a query will be precisely targeted to where the data for that particular customer’s data source lies.
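
To illustrate, a minimal sketch of indexing and querying with a routing key, assuming the 7.x Java high-level REST client (the index name, field and applicationId are made-up examples):

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class RoutedIndexing {
    public void indexAndSearch(RestHighLevelClient client, String applicationId) throws Exception {
        // index: the routing value determines the target shard
        IndexRequest index = new IndexRequest("logs")
                .routing(applicationId)
                .source("{\"message\":\"user logged in\"}", XContentType.JSON);
        client.index(index, RequestOptions.DEFAULT);

        // search: the same routing value restricts the query to that shard only
        SearchRequest search = new SearchRequest("logs")
                .routing(applicationId)
                .source(new SearchSourceBuilder().query(QueryBuilders.matchQuery("message", "logged")));
        client.search(search, RequestOptions.DEFAULT);
    }
}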

This is key for horizontally scaling multitenant applications. When there’s terabytes of data and billions of documents, many shards will be needed (in order to avoid large and heavy shards that cause performance issues). Finding data in this haystack requires the ability to know where to look.

Note that you can (and probably should) make routing required in these cases – each indexed document must be required to have a routing key, otherwise an implementation oversight may lead to a slow index.

Using custom routing you are practically turning one large Elasticsearch cluster into smaller sections, logically separated based on meaningful identifiers. In our case, it is not a userId/customerId, but one level deeper – there are multiple shards per customer, but depending on the use-case, it can be one shard per customer, using the userId/customerId. Using more than one shard per customer may complicate things a little – for example having too many shards per customer may require searches that span too many shards, but that’s not necessarily worse than not using routing.

There are some caveats – the isolation of customer data has to be handled in the application layer (whereas for the first two approaches data is segregated operationally). If there’s an application bug or lack of proper access checks, one user can query data from other users’ shards by specifying their routing key. It’s the role of the application in front of Elasticsearch to only allow queries with routing keys belonging to the currently authenticated user.

There are cases when the first two approaches to multitenancy are viable (e.g. a few very large customers), but in general the routing approach is the most scalable one.

The post ElasticSearch Multitenancy With Routing appeared first on Bozho's tech blog.

Bulk vs Individual Compression

Post Syndicated from Bozho original https://techblog.bozho.net/bulk-vs-individual-compression/

I’d like to share something very brief and very obvious – that compression works better with large amounts of data. That is, if you have to compress 100 sentences, you’d better compress them in bulk rather than one sentence at a time. Let me illustrate that:

public static void main(String[] args) throws Exception {
    List<String> sentences = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        StringBuilder sentence = new StringBuilder();
        for (int j = 0; j < 100; j++) {
            sentence.append(RandomStringUtils.randomAlphabetic(10)).append(" ");
        }
        sentences.add(sentence.toString());
    }
    byte[] compressed = compress(StringUtils.join(sentences, ". "));
    System.out.println(compressed.length);
    System.out.println(sentences.stream().collect(Collectors.summingInt(sentence -> compress(sentence).length)));
}

The compress method is using commons-compress to easily generate results for multiple compression algorithms:

public static byte[] compress(String str) {
   if (str == null || str.length() == 0) {
       return new byte[0];
   }
   ByteArrayOutputStream out = new ByteArrayOutputStream();
   try (CompressorOutputStream gzip = new CompressorStreamFactory()
           .createCompressorOutputStream(CompressorStreamFactory.GZIP, out)) {
       gzip.write(str.getBytes("UTF-8"));
       gzip.close();
       return out.toByteArray();
   } catch (Exception ex) {
       throw new RuntimeException(ex);
   }
}

The results are as follows, in bytes (note that there’s some randomness, so algorithms are not directly comparable):

Algorithm     Bulk   Individual
GZIP          6590   10596
LZ4_FRAMED    9214   10900
BZIP2         6663   12451

Why is that an obvious result? Because of the way most compression algorithms work – they look for patterns in the raw data and create a map of those patterns (a very rough description).

How is that useful? In big data scenarios where the underlying store supports per-record compression (e.g. a database or search engine), you may save a significant amount of disk space if you bundle multiple records into one stored/indexed record.

This is not generically useful advice, though. You should check the particular datastore implementation. For example, MS SQL Server supports both row and page compression. Cassandra does compression at the SSTable level, so it may not matter how you structure your rows. Certainly, if storing data in files, storing it in one file and compressing it is more efficient than compressing multiple files separately.

Disk space is cheap so playing with data bundling and compression may be seen as premature optimization. But in systems that operate on large datasets it’s a decision that can save you a lot of storage costs.

The post Bulk vs Individual Compression appeared first on Bozho's tech blog.