December 31st, 2021
Developer Roll Up: December 2021
Christopher Luft
We are squeaking this one in under the wire as we say goodbye to the second full year in this long pandemic. It has been a busy year with lots of new features and improvements.
To get the new year started right we are putting on a joint webinar with our friends at Tines. Join us on Tuesday January 18, 2022 for No BS security: detect and automate with LimaCharlie and Tines. You can register for the event here.
As we go into 2022 we do so with hope and optimism that we will start moving towards a brighter future for all.
Sensor v4.25.5
Fixes stability issues with running on hardened/customized versions of Linux.
Fixes rare deadlocks when unloading a sensor.
Enhances the performance of the Process List (os_processes) and Network (netstat) views in the webapp for Windows and macOS (Linux will follow soon with eBPF support). This is done through better caching on the sensor. The initial listing request after a sensor starts still has a cold start that can take up to a minute; follow-on listings will be much quicker.
Web-app Updates
Recent UI changes:
The Timeline of a sensor's events has received some performance improvements as well as a facelift: there's so much more space in there to look at the content of events. We made the tree view optional, enabling Timeline to be used as a more general log viewer for ingested logs from external sources (GCP, 1Password, etc.) We've also removed the default event filters when you visit Timeline, so it's up to you how you want to narrow your search.
The list of D&R Rules will look a little different (no more paragraphs of text next to it) and will also be a lot faster if you have a large number of rules.
Same as above with False Positive Rules.
Added a confirmation step when closing most dialogs to make sure we don't accidentally lose input.
Several other small bug fixes and tweaks.
Usage & Billing Redesign
To improve the user experience we have redesigned two pages from scratch: Usage and Billing.
Front and center on the Usage page are the org's Quota Rate and Metered Usage. Quota Rate is calculated as (base cost per sensor + add-ons cost per sensor) * quota. Metered Usage sums up pay-per-use features such as Replay or Artifact Collection. Adding Quota Rate and Metered Usage together gives a good estimate of how an org's bill is trending. The Billing page is where you can view or edit your payment method and delete your org, if required.
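The Usage page math above can be sketched in a few lines. The dollar figures below are made up for illustration and are not LimaCharlie's actual pricing:

```python
# Hypothetical illustration of the Usage page math; the prices below
# are invented, not LimaCharlie's actual rates.
base_cost_per_sensor = 1.00    # assumed base price per sensor
addons_cost_per_sensor = 0.50  # assumed total add-ons price per sensor
quota = 100                    # the org's sensor quota

# Quota Rate = (base cost per sensor + add-ons cost per sensor) * quota
quota_rate = (base_cost_per_sensor + addons_cost_per_sensor) * quota

# Metered Usage sums pay-per-use features (e.g. Replay, Artifact Collection)
metered_usage = sum([12.30, 4.20])  # made-up per-use charges

# Quota Rate + Metered Usage estimates how the bill is trending
estimated_bill = quota_rate + metered_usage
print(f"Estimated bill: ${estimated_bill:.2f}")  # prints: Estimated bill: $166.50
```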
Other highlights:
Chart to see peak online sensors vs your quota.
Chart + table breakdown of all metered usage.
It's now possible to see if there's an existing payment method, including for customers who are on unified billing.
These updates should make it a lot easier to predict your bill. If you have any questions about how billing works, please let us know: we want to make our pricing as transparent as possible without making it overwhelming.
Also in this release, the JSON viewers in the web app will now display new-lines when found in the JSON data. This means the output of run --shell-command "dir c:\\windows" will now be visible in a much more readable state in the web app.
This makes the run --shell-command a lot more usable from the Console view.
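To see why this matters, here is a minimal sketch (with a made-up, simplified event shape) of how multi-line command output travels inside a JSON string as "\n" escapes. A viewer that honors those escapes renders the output as separate lines instead of one long string:

```python
import json

# Simplified, hypothetical event payload: multi-line shell output is
# carried as a single JSON string with embedded "\n" escapes.
event = json.loads('{"output": "Volume in drive C is Windows\\n Directory of c:\\\\windows"}')

# Printing the value honors the new-lines, which is what the web app's
# JSON viewers now do when rendering the data.
print(event["output"])
```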
Velociraptor Service
We have just released a beta Velociraptor service.
This service automates deploying and running Velociraptor and collecting Velociraptor Artifacts.
It supports 3 actions:
List to show all built-in Artifacts the latest release of Velociraptor supports.
Show to display usage of a specific built-in Artifact.
Collect to trigger an actual collection of Artifacts.
The service requires the reliable-tasking service to be installed (so that Velociraptor can collect on large scale even if some endpoints are offline).
The service supports built-in Artifacts but also deploying custom Artifact YAML config files.
This beta does not have a custom web UI. To use it:
Subscribe to the service.
Go to the Service Request section.
Select the velociraptor tab.
Turn off the Run as background job toggle to see results immediately.
Select the action collect.
Select an artifact_name from the list.
Select a sensor (or tag) to identify where to collect from.
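Put together, the steps above amount to filling in a request like the following. This is only a sketch assembled from the options listed here: the field names are assumptions, not a documented schema, so check the linked documentation for the real request format:

```python
# Hypothetical shape of a Velociraptor "collect" service request,
# assembled from the steps above. Field names are assumptions.
request = {
    "action": "collect",                   # one of the 3 actions: list, show, collect
    "artifact_name": "Windows.Sys.Users",  # a built-in Velociraptor Artifact
    "sensor": "<sensor-id-or-tag>",        # where to collect from
    "run_as_background_job": False,        # off, to see results immediately
}
print(request["action"])  # prints: collect
```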
Doc: https://doc.limacharlie.io/docs/documentation/ZG9jOjMxMTU2NDU3-velociraptor