Bring on the documentation

Post Syndicated from Alasdair Allan original https://www.raspberrypi.org/blog/bring-on-the-documentation/

I joined Raspberry Pi eighteen months ago and spent my first year here keeping secrets and writing about Raspberry Silicon, the chip that would eventually be known as RP2040. This is all (largely) completed work: Raspberry Pi Pico made its way out into the world back in January, and our own Raspberry Silicon followed last month.

The question is then, what have I done for you lately?

The Documentation

Until today, our documentation for the “big” boards — as opposed to Raspberry Pi Pico — lived in a GitHub repository and was written in GitHub-flavoured Markdown. Our documentation site was built from that Markdown source: it was pulled periodically from the repository, run through a script written many years ago that turned it into HTML, and then deployed onto our website.

This all worked really rather well in the early days of Raspberry Pi.

The old-style documentation

The documentation repository itself had been left to grow organically. When I arrived here, it needed to be restructured: a great deal of non-Raspberry Pi-specific documentation needed to be removed, while other areas were underserved and needed to be expanded. The documentation was created when there was a lot less third-party content around to support the Raspberry Pi, so a fair bit of it really isn’t that relevant anymore and is better dealt with elsewhere on the web. And the structure was a spider’s web that, in places, made very little sense.

Frankly, it was all in a bit of a mess.

Enter the same team of folks that built the excellent PDF-based documentation for Raspberry Pi Pico and RP2040. That PDF documentation was built with an Asciidoc-based toolchain, and we knew from the outset that we’d want to migrate the Markdown-based documentation to Asciidoc. It would offer us more powerful tools going forward, and a lot more flexibility.

After working through the backlog of community pull requests, we took a snapshot of the current Markdown-based repository and built out a toolchain, much of which we intended to (and did) throw away after converting the Markdown to Asciidoc as our “source of truth.” This didn’t happen without a bit of a wrench; nobody throws working code away lightly. But it did mean we’d reached the point of no return.

The next generation of documentation

The result of our new documentation project launches today.

The new-look documentation

The new documentation site is built and deployed directly from the documentation repository using GitHub Actions when someone pushes to the master branch. However, we’ll mostly be working on the develop branch in the repository, which is the default branch you’ll now get when you take a fresh checkout, and also the branch you should target for your pull requests.
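
In practice, a contribution now starts something like this (an illustrative sketch; the topic branch name is just an example, and the repository URL shown is the public documentation repo):

git clone https://github.com/raspberrypi/documentation.git
cd documentation
# develop is the default branch, and the branch pull requests should target
git checkout develop
git checkout -b my-typo-fix
# edit the Asciidoc source, commit, push to your fork, and open a pull request against develop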

We’ve always taken pull requests against the Markdown-based source behind our documentation site. Over the years as the documentation set has grown there have been hundreds of community contributors, who have made over 1,200 individual pull requests, ranging from fixing small typos, to contributing whole new sections.

With the introduction of the new site, we’re going to continue to take pull requests against the new Asciidoc-based documentation. However, we’re going to be a bit more targeted about what we accept into the documentation, and will be looking to keep the repository focussed on Raspberry Pi-specific things, rather than generic Linux tutorial content.

The documentation itself will remain under a Creative Commons Attribution-ShareAlike (CC BY-SA 4.0) license.

Product Information Portal

Supporting our customers in the best way we can when they build products around Raspberry Pi computers is important to us. A big part of this is making it easy for customers to get access to the right documents. So alongside the new-look documentation, we have revamped how our customers (that’s you) get access to the documents you need for commercial applications of Raspberry Pi.

The Product Information Portal, or PIP as we’ve come to refer to it here at Pi Towers, is where documents such as regulatory paperwork, product change notices, and white papers will be stored and accessed from now on.

The new Product Information Portal (PIP)

PIP has three tiers of document type: those which are publicly available; restricted documents that require a customer to sign up for a free account; and confidential documents which require a customer’s company to enter into a confidentiality agreement with Raspberry Pi.

PIP will also be a way for customers to get updates on products: anyone with a user account can subscribe to products and receive email updates should there be a product change, regulatory update, or white paper release.

The portal can be found at pip.raspberrypi.org and will be constantly updated as new documents become available.

Where next?

I’m hoping that everyone who has contributed to the documentation over the years will see the new site as a big step towards making our documentation more accessible – and, as ever, we accept pull requests. However, if you’re already a contributor, the easiest thing to do is to take a fresh checkout of the repository, because things have changed a lot today.

Big changes to the look-and-feel of the documentation site

This isn’t the end. Instead, it’s the beginning of a journey to pull together our documentation into something that feels a bit more cohesive. While the documentation set now looks, and feels, a lot better and is (I think) a lot easier to navigate if you don’t know it well, there is still a lot of pruning and re-writing ahead of me. But we’ve reached the stage where I’m happy to, and want to, work on that in public so the community can see how things are changing and can help out.


Defeating Microsoft’s Trusted Platform Module

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/defeating-microsofts-trusted-platform-module.html

This is a really interesting story explaining how to defeat Microsoft’s TPM in 30 minutes — without having to solder anything to the motherboard.

Researchers at the security consultancy Dolos Group, hired to test the security of one client’s network, received a new Lenovo computer preconfigured to use the standard security stack for the organization. They received no test credentials, configuration details, or other information about the machine.

They were not only able to get into the BitLocker-encrypted computer, but also to use the computer to get into the corporate network.

It’s the “evil maid attack.” It requires physical access to your computer, but you leave it in your hotel room all the time when you go out to dinner.

Original blog post.

Practical Doomsday

Post Syndicated from Unknown original https://lcamtuf.blogspot.com/2021/08/practical-doomsday.html


Practical Doomsday is an enjoyable, data-packed romp through the world of rational emergency preparedness. It cuts through the noise of 24-hour news to help you zero in on what actually matters: building a diversified rainy-day fund, staying safe online, and dealing with common mishaps ranging from prolonged power outages to wildfires or floods.

The goal of the book is not to convince you that the end is nigh. To the contrary: I want to reclaim the concept of prepping from the bunker-dwelling prophets of doom. Disasters are not rare, but emergency preparedness is not about expecting the worst; it’s about being able to enjoy our lives to the fullest without worrying about the apocalyptic headline of the day.

☛ Click here to read a sample chapter
☛ Click to order from the publisher

Use code PREORDERDOOMSDAY to get 30% off. Preorders from the publisher get instant early access to a PDF version of the book. Orders can also be placed on Amazon, Barnes & Noble, and in most other places where books are sold.

CICD on Serverless Applications using AWS CodeArtifact

Post Syndicated from Anand Krishna original https://aws.amazon.com/blogs/devops/cicd-on-serverless-applications-using-aws-codeartifact/

Developing and deploying applications rapidly to users requires a working pipeline that accepts the user code (usually via a Git repository). AWS CodeArtifact was announced in 2020. It’s a secure and scalable artifact management product that easily integrates with other AWS products and services. CodeArtifact allows you to publish, store, and view packages, list package dependencies, and share your application’s packages.

In this post, I will show how we can build a simple DevOps pipeline for a sample Java application (a JAR file) built with Maven.

Solution Overview

We use the following AWS services, tools, and frameworks to set up our continuous integration/continuous deployment (CI/CD) pipeline:

The following diagram illustrates the pipeline architecture and flow:

 

aws-codeartifact-pipeline

 

Our pipeline is built on CodePipeline with CodeCommit as the source (CodePipeline Source Stage); a push to the repository triggers the pipeline via a CloudWatch Events rule. The code is fetched from the CodeCommit repository branch (main) and sent to the next pipeline phase: a CodeBuild phase that compiles, packages, and publishes the code to CodeArtifact using a package manager—in this case Maven.

After Maven publishes the code to CodeArtifact, the pipeline waits for a manual approval step, which can also optionally trigger an email alert via Amazon Simple Notification Service (Amazon SNS). After approval, the pipeline moves to another CodeBuild phase, which downloads the latest packaged JAR file from the CodeArtifact repository and deploys it to the AWS Lambda function.
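
To make that first build phase concrete, the publish step essentially boils down to commands along these lines (a hedged sketch rather than the actual buildspec from the sample repository; it assumes settings.xml reads the token from the CODEARTIFACT_AUTH_TOKEN environment variable):

# fetch a short-lived CodeArtifact token for Maven to authenticate with
export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain cdkpipelines-codeartifact --domain-owner $Account_Id --query authorizationToken --output text)
# compile, package, and publish the JAR to the repository configured in pom.xml/settings.xml
mvn -s settings.xml clean package deploy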

Clone the Repository

Clone the GitHub repository as follows:

git clone https://github.com/aws-samples/aws-cdk-codeartifact-pipeline-sample.git

Code Deep Dive

After the Git repository is cloned, the directory structure is shown in the following screenshot:

aws-codeartifact-pipeline-code

Let’s study the files and code to understand how the pipeline is built.

The directory java-events contains a sample Java Maven project. You can find numerous sample applications on GitHub; for this post, we use the java-events sample application.

To add your own application code, place the pom.xml and settings.xml files in the root directory for the AWS CDK project.

Let’s study the code in lib/cdk-pipeline-codeartifact-new-stack.ts, which defines the stack CdkPipelineCodeartifactStack. This is the heart of the AWS CDK code that builds the whole pipeline. The stack does the following:

  • Creates a CodeCommit repository called ca-pipeline-repository.
  • References a CloudFormation template (lib/ca-template.yaml) in the AWS CDK code via the module @aws-cdk/cloudformation-include.
  • Creates a CodeArtifact domain called cdkpipelines-codeartifact.
  • Creates a CodeArtifact repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarBuild_CodeArtifact. This CodeBuild phase does all of the code compiling, packaging, and publishing to CodeArtifact into a repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarDeploy_Lambda_Function. This phase fetches the latest artifact from CodeArtifact created in the previous step (cdkpipelines-codeartifact-repository) and deploys to the Lambda function.
  • Finally, creates a pipeline with four phases:
    • Source as CodeCommit (ca-pipeline-repository).
    • CodeBuild project JarBuild_CodeArtifact.
    • A Manual approval Stage.
    • CodeBuild project JarDeploy_Lambda_Function.

 

CodeArtifact shows the domain-specific and repository-specific connection settings to add to the application’s pom.xml and settings.xml files, as shown below:

aws-codeartifact-repository-connections
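
If you prefer the command line to the console, the same Maven endpoint can be looked up with the AWS CLI (an illustrative call, using the domain and repository names created by the stack):

# print the Maven endpoint URL for the CodeArtifact repository
aws codeartifact get-repository-endpoint --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --format maven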

Deploy the Pipeline

The AWS CDK code requires the following packages in order to build the CI/CD pipeline:

  • @aws-cdk/core
  • @aws-cdk/aws-codepipeline
  • @aws-cdk/aws-codepipeline-actions
  • @aws-cdk/aws-codecommit
  • @aws-cdk/aws-codebuild
  • @aws-cdk/aws-iam
  • @aws-cdk/cloudformation-include

 

Install the required AWS CDK packages as below:

npm i @aws-cdk/core @aws-cdk/aws-codepipeline @aws-cdk/aws-codepipeline-actions @aws-cdk/aws-codecommit @aws-cdk/aws-codebuild @aws-cdk/pipelines @aws-cdk/aws-iam @aws-cdk/cloudformation-include

Compile the AWS CDK code:

npm run build

Deploy the AWS CDK code:

cdk synth
cdk deploy

After the AWS CDK code is deployed, view the final output on the stack’s detail page in the AWS CloudFormation console:

aws-codeartifact-pipeline-cloudformation-stack
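
As a quick, purely illustrative sanity check (not part of the pipeline itself), a few read-only AWS CLI calls can confirm the main resources were created:

# confirm the CodeCommit repository, CodeArtifact repositories, and pipeline exist
aws codecommit get-repository --repository-name ca-pipeline-repository
aws codeartifact list-repositories-in-domain --domain cdkpipelines-codeartifact --domain-owner $Account_Id
aws codepipeline list-pipelines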

 

How the pipeline works with artifact versions (using SNAPSHOTS)

In this demo, I publish a SNAPSHOT version to the repository. As per the documentation here and here, a SNAPSHOT refers to the most recent code along a branch; it’s a development version preceding the final release version. A snapshot version of a Maven package is identified by the suffix SNAPSHOT appended to the package version.

The application settings are defined in the pom.xml file. For this post, we define the following:

  • The version to be used, called 1.0-SNAPSHOT.
  • The specific packaging, called jar.
  • The specific project display name, called JavaEvents.
  • The specific group ID, called JavaEvents.

The screenshot below shows the pom.xml settings we used in the application:

aws-codeartifact-pipeline-pom-xml

 

You can’t republish a package asset that already exists with different content, as per the documentation here.

When a Maven snapshot is published, its previous version is preserved in a new version called a build. Each time a Maven snapshot is published, a new build version is created.

When a Maven snapshot is published, its status is set to Published, and the status of the build containing the previous version is set to Unlisted. If you request a snapshot, the version with status Published is returned. This is always the most recent Maven snapshot version.
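
You can see these statuses for yourself by listing the package versions with the AWS CLI; an illustrative call (the --status filter is optional and is shown here only to pick out the Published version):

aws codeartifact list-package-versions --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --format maven --namespace JavaEvents --package JavaEvents --status Published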

For example, the image below shows the state after the first pipeline run: the latest version has the status Published, and previous builds are marked Unlisted.

aws-codeartifact-repository-package-versions

 

For all subsequent pipeline runs, additional Unlisted versions will appear each time the pipeline runs, as all previous versions of a snapshot are maintained as build versions.

aws-codeartifact-repository-package-versions

 

Fetching the Latest Code

To deploy the code to the AWS Lambda function, retrieve the snapshot from the repository. I have used the AWS CLI to list and fetch the latest asset of package version 1.0-SNAPSHOT.

 

Listing the latest snapshot

export ListLatestArtifact=`aws codeartifact list-package-version-assets --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --namespace JavaEvents --format maven --package JavaEvents --package-version "1.0-SNAPSHOT" | jq ".assets[].name" | grep jar | sed 's/"//g'`

NOTE: Please note the dynamic variable $Account_Id, which represents the AWS account ID.

 

Fetching the latest code using Package Version

aws codeartifact get-package-version-asset --domain cdkpipelines-codeartifact --repository cdkpipelines-codeartifact-repository --format maven --package JavaEvents --package-version 1.0-SNAPSHOT --namespace JavaEvents --asset $ListLatestArtifact demooutput

Notice that I refer to the latest code using the variable $ListLatestArtifact. This always fetches the latest code, and demooutput is the outfile of the AWS CLI command where the content (code) is saved.
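
From there, pushing the downloaded JAR to the function is a single call; an illustrative example that assumes the Lambda function referenced later in this post (codeartifact-test-function) and a Java runtime that accepts the JAR directly:

# update the existing Lambda function with the freshly downloaded artifact
aws lambda update-function-code --function-name codeartifact-test-function --zip-file fileb://demooutput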

 

Testing the Pipeline

Now clone the CodeCommit repository that we created with the following code:

git clone https://git-codecommit.<region>.amazonaws.com/v1/repos/ca-pipeline-repository

 

Enter the following commands to push the code to the CodeCommit repository:

cp -rp cdk-pipeline-codeartifact-new/* ca-pipeline-repository
cd ca-pipeline-repository
git checkout -b main
git add .
git commit -m "testing the pipeline"
git push origin main

Once the code is pushed to the Git repository, the pipeline is automatically triggered by Amazon CloudWatch Events.

The following screenshots show the second phase (AWS CodeBuild Phase – JarBuild_CodeArtifact) of the pipeline, wherein the asset is successfully compiled and published to the CodeArtifact repository by Maven:

aws-codeartifact-pipeline-codebuild-jarbuild

aws-codeartifact-pipeline-codebuild-screenshot

aws-codeartifact-pipeline-codebuild-screenshot2

 

The following screenshots show the last phase (AWS CodeBuild Phase – Deploy-to-Lambda) of the pipeline, wherein the latest asset is successfully pulled and deployed to the AWS Lambda function.

Asset JavaEvents-1.0-20210618.131629-5.jar is the latest snapshot code for the package version 1.0-SNAPSHOT. This is the same asset version that will be deployed to the AWS Lambda function, as seen in the screenshots below:

aws-codeartifact-pipeline-codebuild-jardeploy

aws-codeartifact-pipeline-codebuild-screenshot-jarbuild

The following screenshot of the pipeline shows a successful run. The code was fetched and deployed to the existing Lambda function (codeartifact-test-function).

aws-codeartifact-pipeline-codepipeline

Cleanup

To clean up, you can either delete the entire stack through the AWS CloudFormation console or use the AWS CDK command below:

cdk destroy

For more information on the AWS CDK commands, please check the documentation here or the sample here.

Summary

In this post, I demonstrated how to build a CI/CD pipeline for your serverless application with AWS CodePipeline by utilizing AWS CDK with AWS CodeArtifact. Please check the documentation here for an in-depth explanation regarding other package managers and the getting started guide.

Metasploit Wrap-Up

Post Syndicated from Matthew Kienow original https://blog.rapid7.com/2021/08/06/metasploit-wrap-up-124/

Desert heat (not the 1999 film)


This week was quieter than normal with Black Hat USA and DEF CON, but that didn’t stop the team from delivering some small enhancements and bug fixes! We are also excited to see two new modules, #15519 and #15520, from researcher Jacob Baines’ DEF CON talk Bring Your Own Print Driver Vulnerability already appearing in the PR queue. Keep an eye out for those modules in the near future!

Our very own Simon Janusz enhanced the CommandDispatcher and SessionManager to support using a negative ID with both the jobs and sessions commands. Quickly access the last job or session by passing -1 to the command. The change allows users to upgrade the most recently opened session to meterpreter using the command sessions -u -1, thus removing the need to run the post/multi/manage/shell_to_meterpreter module.
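
For illustration, a hypothetical msfconsole exchange might look like this (only sessions -u -1 is explicitly called out above; the other negative-ID uses follow the same convention and are assumptions):

sessions -u -1    # upgrade the most recently opened session to meterpreter
sessions -i -1    # interact with the most recently opened session
jobs -k -1        # kill the most recently started job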

In addition, our very own Alan David Foster updated the PostgreSQL scanner/postgres/postgres_schemadump module so that it does not ignore the default postgres database. That default database might contain valuable information after all! The enhancements also introduce a new datastore option, IGNORED_DATABASES, to configure a list of databases ignored during the schema dump.
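
A hypothetical run showing the new option (the target host and credentials are placeholders, and the ignored-database values are purely illustrative):

use auxiliary/scanner/postgres/postgres_schemadump
set RHOSTS 192.0.2.10
set USERNAME postgres
set PASSWORD postgres
set IGNORED_DATABASES template1,template0
run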

Enhancements and features

  • #15492 from sjanusz-r7 – Adds support for negative session and job ids.
  • #15498 from adfoster-r7 – Updates the PostgreSQL schema_dump module to no longer ignore the default postgres database which may contain useful information, and adds a new datastore option to configure ignored databases.

Bugs fixed

  • #15500 from agalway-r7 – Fixes a regression issue for gitlab_file_read_rce and cacti_filter_sqli_rce where the modules failed to run.
  • #15503 from jheysel-r7 – Fixes a bug in the Cisco Hyperflex file upload RCE module that prevented it from properly deleting the uploaded payload files.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).
