Security Metrics That Matter
We measure so that we can improve and report. Reporting is for our bosses and job security. Improvement is for us. As an outnumbered security professional you will never, ever have enough time, money and resources to add every layer of defence you wish you could, which means we need to work smarter. Learn about which metrics truly matter, and which vanity metrics you can learn to safely ignore, so that you can work the most effectively at protecting your organization.
Full Circle Detection: From Hunting to Actionable Detection
How do you create new detection rules that are efficient, accurate, and resilient? There are many steps to follow. This talk will take you through what I call Full Circle Detection, starting with where to get hunting ideas and ending with turnkey alerts for your Security Analysts, using a real-world step-by-step example.
In this talk the audience will see how a simple blog article (about an Outlook Persistence technique) can and should spark a whole chain of action from your security team.
For each of the applicable steps below, sample code will be provided.
1. The idea/hypothesis
○ You read a good blog post on a technique and you hunt for the IOCs
2. Converting the hunt query/analytics into detection in your SIEM
○ Nobody wants to run the same search over and over again
3. Make sure your detection is working
○ A good query is no guarantee that you will find events
○ Make an Atomic Red Team (ART) test to mimic the attack on a test server
○ Submit a PR for your ART test
4. Share detection with the community
○ Make a Sigma rule and PR
○ Of course, some of the exclusions are org-specific, so be careful about how and what you share
5. Make sure your detection pipeline is working
○ You need to make sure your whole pipeline is working.
○ Did the last update to your SIEM change something that prevents future events from triggering your alert?
○ Use Scheduled Tasks, a CI/CD pipeline, Docker, etc. to launch the ART test on a regular basis
○ Remove the test system from the alert to avoid SOC Analyst fatigue
6. Create the IR Playbook
○ Before your SOC Analysts can actually handle these alerts, they need a step-by-step guide
○ Try to base it on an open-source project like https://github.com/atc-project/atc-react
○ There's also a good SANS presentation that proposes a very clear flowchart
○ I'm working on open-sourcing some playbooks I've built at work as well
○ You should build training for your current and future analysts
○ Something that is easy to consume
With all of those steps done, you have come, in my opinion, full circle on your detection.
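The interaction between steps 3 and 5 above can be sketched in a few lines. This is a minimal illustration, not code from the talk: the host name, event shape, and helper function are all hypothetical. The idea is that scheduled ART tests should keep firing the rule (proving the pipeline still works), while test hosts are excluded from analyst-facing alerts to avoid fatigue.

```python
# Hypothetical sketch of validating a detection pipeline with scheduled
# ART tests. All names (TEST_HOSTS, triage, event fields) are illustrative.

TEST_HOSTS = {"art-test-01"}  # systems running scheduled Atomic Red Team tests

def triage(events):
    """Split raw detection hits into analyst alerts and a pipeline health signal."""
    alerts, pipeline_ok = [], False
    for event in events:
        if event["host"] in TEST_HOSTS:
            pipeline_ok = True      # the scheduled ART test still triggers the rule
        else:
            alerts.append(event)    # real hit: route it to the SOC
    return alerts, pipeline_ok

# Example: one scheduled-test hit and one real hit for the same rule
events = [
    {"host": "art-test-01", "rule": "outlook_persistence"},
    {"host": "hr-laptop-7", "rule": "outlook_persistence"},
]
alerts, pipeline_ok = triage(events)
```

If `pipeline_ok` ever stays false after a scheduled test run, something between event generation and alerting has silently broken, which is exactly the failure mode step 5 is meant to catch.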
Unmasking the Chameleons of the Criminal Underground: An Analysis From Bot To Illicit Market Level
Large corporations have access to sophisticated anti-fraud systems that monitor dozens of signals each time a customer or employee logs into their web portal. Past investigations have shown that malicious actors use malware to build profiles of their victims and create virtual environments that precisely replicate the victims' computers' fingerprints. These profiles can be loaded up in specially crafted browser plugins and used in account takeover attacks. They are sold on private markets and can fetch hundreds of dollars when they also include the victims' cookies and credentials for financial institutions. The aim of this presentation is to map, over a period of a month, all of the Canadian activities of a machine fingerprint market. Our analysis extends past research first by developing a new understanding of how, and which, Canadians are targeted by this type of attack. Secondly, it presents models that predict not only the price of profiles for sale but also which profiles will end up being sold among the thousands that are for sale. We present estimates for the Canadian market for profiles for sale, and propose hypotheses as to the size of the impact of these illicit activities.
Large corporations have access to, and use, incredibly sophisticated anti-fraud systems that monitor dozens of signals each time one of their customers or employees logs into their web portal. These signals include which browser is used, which plugins are installed, and even the language of the user's software. Past investigations have shown that malicious actors use malware to build profiles of their victims and create virtual environments that precisely replicate the victims' computers' fingerprints. These profiles can be loaded up in specially crafted browser plugins and used in account takeover attacks. They are sold on private markets and can fetch hundreds of dollars when they also include the victims' cookies and credentials for financial institutions. The aim of this presentation is to build on past research and to map, over a period of a month, all of the Canadian activities of a machine fingerprint market. Our analysis extends past research first by developing a new understanding of how, and which, Canadians are targeted by this type of attack. Secondly, it presents models that predict not only the price of profiles for sale – i.e., what makes a profile more valuable – but also which profiles will end up being sold among the thousands that are for sale. Through these analyses, we end up with estimates for the Canadian market for profiles for sale, and propose hypotheses as to the size of the impact of these illicit activities on the Canadian economy. The market for fingerprinting victims is growing exponentially and promises to be, along with ransomware, one of the biggest threats of the coming year. With more detailed knowledge of this problem, companies and individual victims will be better equipped to protect themselves against these attacks and limit the monetization of the criminal underground.
Repo Jacking: How Github usernames expose 70,000 open-source projects to remote code injection
Due to one small Github feature, some projects that depend directly on a Github repository are vulnerable to remote code injection. This talk will discuss novel research that was conducted to determine the prevalence of an obscure vulnerability related to Github project dependencies. The research demonstrates that this vulnerability, repo jacking, is exceedingly widespread and affects over 70,000 open-source projects. We will explain the vulnerability itself, what caused it and how to exploit it, as well as how we scanned a large percentage of open-source projects for this vulnerability. Finally, we will also discuss mitigations and how to protect yourself and your projects from it.
Does your project depend on a Github repository? It might become vulnerable to remote code injection simply due to one small Github feature. This talk will discuss ‘repo jacking’, an obscure supply chain vulnerability that allows attackers to hijack Github repositories and achieve remote code execution through dependency injection. This vulnerability has become exceedingly widespread in open-source projects and over 70,000 projects are affected. This vulnerability can affect any language and has been found to impact small personal games, huge web frameworks, cryptocurrency wallets, and everything in between. Come learn about this vulnerability, what causes it, why it has gone unnoticed for so long, and how to exploit it. Learn how you too can scan all open-source projects for this vulnerability, look for other similar vulnerabilities, and build dependency graphs to fully understand the impact of these types of issues. Finally, come hear about the outcome of this analysis, see how prevalent it is, who is impacted, and discuss some important mitigation strategies that you can use to protect your own projects from this, and other supply chain attacks.
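One building block of such a scan can be sketched briefly. This is an illustrative fragment, not the authors' tooling: the regex and the renamed-owner list are assumptions for the example. A scanner first has to extract GitHub owner/repo pairs from dependency specifiers so that each owner can later be checked for renames or deletions, the condition that makes the old name re-registerable and the redirect hijackable.

```python
import re

# Illustrative sketch: extract GitHub owner/repo pairs from dependency
# specifiers so they can be checked for renamed or deleted owners.
# The regex and the renamed-owner set are assumptions for the example.

GITHUB_DEP = re.compile(r"github\.com[/:]([\w.-]+)/([\w.-]+?)(?:\.git)?(?:[\s/@#]|$)")

def github_deps(manifest_text):
    """Return the sorted set of (owner, repo) pairs referenced by a manifest."""
    return sorted(set(GITHUB_DEP.findall(manifest_text)))

manifest = """
require github.com/old-owner/widget v1.2.0
git+https://github.com/active-owner/lib.git@v2
"""

renamed_owners = {"old-owner"}  # e.g. discovered by probing for redirects
at_risk = [(o, r) for o, r in github_deps(manifest) if o in renamed_owners]
```

In a real scan, the `renamed_owners` check would be replaced by querying whether the original username still exists, since a dependency that follows a redirect from a freed-up username is the one an attacker can take over.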
A Data Science Way to Deal with Advanced Threats
Is your SOC flooded with False Positives, but you are afraid to raise the rules' thresholds because this will allow advanced attackers to stay under the radar? Are your SOC analysts overwhelmed by the amount of data they have to go through in order to give an initial assessment of a security event? In this talk we will share Data Science methods that proved successful in addressing the above-mentioned challenges in our corporate setup. Specifically, we will go over combining Unsupervised and Supervised Learning (Elastic and Scikit-Learn) and advanced visualizations providing a "light speed" deep dive into anomaly triage and environment monitoring (Python and a Plotly dashboard). We will demonstrate how all of this was used to detect distributed credential attacks that stayed under the radar of other solutions, while saving time for our analysts.
The talk will start with an explanation of the flexibility that the Machine Learning (ML) approach brings compared to a static, rule-based one. (Throughout the talk, we will follow a credential attack T1078 example for illustrative purposes, but we will explain how the suggested approach generalizes to other Mitre ATT&CK TTPs.) Specifically, the latter suffers when thresholds change over time and/or vary from one monitored entity (corporation/user/server/website/etc.) to another. This leads either to attackers being able to "stay under the radar" or to analysts being flooded with False Positives.
The first part of our response to this challenge consists of utilizing Unsupervised ML for anomaly detection, which performs historical profiling of sources and outputs a measure of deviation of a given observable from the "norm". This can be done in a number of ways, but we currently use the Elastic ML component. Taking into account the recent license change announcement by Elastic, we note that Elastic ML can be substituted with free, open-source solutions, for example Python and the Scikit-Learn ML library.
This is not the end of the story, as advanced attackers understand that their activity is being monitored and use automation tools to bypass detections. Even though the first part of our solution considerably reduces the number of entities one needs to analyze (roughly from millions to tens of thousands in our environment), this is still not feasible for our analysts. Thus, the second part consists of tracking anomalies corresponding to various attackers across various log sources and leveraging Supervised ML to aggregate risk. Again, a number of options are available, but we specifically use the free, open-source Scikit-Learn ML library.
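The two stages can be illustrated with a deliberately tiny, stdlib-only toy that stands in for the Elastic ML and Scikit-Learn components described above (the feature, the thresholds, and the weights are invented for the example). Stage one scores how far an observable deviates from an entity's historical norm; stage two aggregates per-log-source anomaly scores into a single risk value per entity.

```python
import statistics

# Toy two-stage sketch (stand-in for Elastic ML / Scikit-Learn; all
# numbers and weights below are illustrative, not from the deployment).

def anomaly_score(history, value):
    """Stage 1 (unsupervised): deviation of an observable from its historical norm."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(value - mean) / stdev

def entity_risk(scores, weights):
    """Stage 2 (supervised stand-in): aggregate per-source anomaly scores
    into one risk value per entity; in practice the weights would be learned."""
    return sum(w * s for w, s in zip(weights, scores))

# Hourly failed-logon counts for one user, then a sudden burst
auth_history = [2, 3, 1, 2, 4, 2, 3]
s_auth = anomaly_score(auth_history, 40)    # clearly anomalous
s_vpn = anomaly_score([1, 1, 2, 1], 1)      # unremarkable
risk = entity_risk([s_auth, s_vpn], weights=[0.7, 0.3])
```

Ranking entities by such an aggregated risk score, rather than alerting on each raw anomaly, is what shrinks tens of thousands of anomalous entities down to a reviewable queue.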
Finally, we arrive at the last challenge: how can analysts monitor an environment full of anomalies from not-easily-interpretable ML models and an abundance of data coming from various types of logs? We address this issue by providing a front-end written in Python using a Plotly dashboard (we use only free, open-source components, although the latter library also has a commercial offering). It allows analysts to interactively monitor the security environment and provide prompt initial triage for any of the anomalies. It includes a novel (to our knowledge) way to succinctly visualize the most pertinent features of a large number of events surrounding a potential incident (weighted chains).
We conclude our presentation with a demonstration of our approach based on real, though anonymized, data. It represents a subsample of one of the distributed attacks that our solution detected and all other solutions available to us missed. Additionally, we show why analysts performing triage reported saving time when processing tickets.
Damn GraphQL - Attacking and Defending APIs
Security teams are in a never-ending race against newly rising technologies. Often, these technologies are not secure by default and require deep research to defend, in order to succeed in balancing technology adoption with security. The challenge with new technologies is that the security knowledge and tooling may not be as mature as with older technologies. This talk will provide insight into GraphQL, a REST API alternative, and focus on how to run security tests against it, as well as how to defend against the various possible attack vectors.
With the rise of GraphQL, a query language made by Facebook, security professionals must be ready for the day GraphQL hits their company's networks.
In this talk, we will walk through GraphQL basics, followed by a deep dive into the various GraphQL attack vectors, from Information Gathering to Denial of Service and Injections.
Additionally, we will discuss a recent security platform release - Damn Vulnerable GraphQL Application (DVGA), a platform made for security practitioners to learn GraphQL and its various weaknesses in a safe testing environment.
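One small taste of the Information Gathering vector mentioned above can be shown in a few lines (the endpoint URL is a placeholder and the query is a generic example, not taken from DVGA): when introspection is left enabled, a single query reveals a server's entire schema of types and fields.

```python
import json

# A minimal introspection probe of the kind used for information
# gathering against GraphQL endpoints. The target URL is a placeholder.

INTROSPECTION_QUERY = """
{
  __schema {
    types {
      name
      fields { name }
    }
  }
}
"""

payload = json.dumps({"query": INTROSPECTION_QUERY})

# e.g. POST `payload` to https://target.example/graphql with
# Content-Type: application/json, then walk the returned type list.
```

A server that answers this probe hands an attacker a complete map of queries and mutations to target, which is why disabling or restricting introspection in production is a common first hardening step.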
Building CANtact Pro: An Open Source CAN Bus Tool
Ever wanted to build your own hardware tool? In this talk, we'll discuss the design and release process for the CANtact Pro device. From PCB design to driver development, there's a lot of steps that go into bringing a hardware idea to market. This talk will give you a better understanding of this process and how you can launch your own hardware product. We'll talk about open source tools for designing PCBs, writing cross-platform drivers using Rust, the economics of releasing a device, and the unavoidable logistical headaches of building hardware.
Back in 2014, I launched an open source CAN bus tool called CANtact. This was one of the first widely available CAN bus tools that was open source and low cost. Since then CANtacts have found their way into many automotive companies, government agencies, and hobbyist's tool boxes.
CANtact Pro is the successor to the CANtact device. It adds isolation, high speed USB, CAN-FD support, and a case. This project was launched through Crowd Supply and shipped to backers in late 2020.
This talk will discuss the process of developing, releasing, and selling an open source hardware device. We'll cover the device design process and the logistics of bringing it to market. If you've ever wanted to release your own hardware tools, this talk will give you an understanding of how to do it.
Burnout: Destabilizing Retention Goals and Threatening Organizational Security
Several trends are now colliding to make burnout among security professionals a greater threat to business continuity than ever before. From alignment of deployment decisions with employee training to judgement-free skills assessments and engaging upskilling, every organization can take common sense-yet-uncommon steps to prevent and address burnout, and increase security talent retention.
Did you notice a shift in your mental health and/or your colleagues'? Burnout was at an all-time high last year due to the surreal events of 2020. As we approach 2021, we recognize what a critical role mental health plays in accomplishing goals and maintaining productivity. This talk dives into the factors that lead to burnout among security professionals, the clear line between burnout and failure to retain cybersecurity talent, and how to invest in your team to make sure they are able to thrive during stressful times.
Blurred lines - The mixing of APTs with Crimeware groups
State-sponsored actors and APT groups are not necessarily the same. A state-sponsored actor can be defined as an APT that is supported in some way by a state. This does not automatically make all APTs state-sponsored. APT actors that provide hacking-as-a-service are not necessarily state-sponsored actors because they can’t be tied to a specific state — they will work for whoever pays the most. But this doesn’t mean that they shouldn’t be considered an APT. These lines get even blurrier when an actor has the characteristics and behaviour we observe in the Gamaredon and Prometium groups. These groups' main interest has been espionage, without any indication of using crimeware techniques to monetize their activity, which should put them outside the crimeware gang definition; however, their behaviour certainly resembles that of a crimeware gang rather than an APT.
Our presentation shows there is space for a second-tier APT classification, one where the actor provides breach services to a larger actor, almost mimicking what happens in the crimeware scene, where some groups just gather credentials which they then sell to other crimeware groups. There are other groups that may offer hacking-as-a-service, but rather than working for the highest bidder, they serve a specific country or group, perhaps to align with their own intentions. At the same time, these groups will do whatever is best to maximize their gains. The advantage in this case is that they benefit from the “protection” of the APT for which they provide the services. Finally, this second-tier category should also include the APTs that lack the sophistication of others and often have their operations exposed due to bad opsec or amateurish mistakes. We believe that challenging the status quo on Gamaredon and others that could fit the previous definition is beneficial as a whole. It will help organizations better understand the threats on which they must focus their resources. The fact remains that Gamaredon is a notoriously prolific group operating without any constraints, at a globally impactful level.
Forensicating Endpoint Artifacts in the World of Cloud Storage Services
In this presentation, I will discuss the key forensic artifacts that DFIR professionals can use whenever they encounter cloud storage services on a host, such as OneDrive, Google Drive, Box and Dropbox. These are all essential, especially when an attacker or insider threat leverages these services to exfiltrate data. I will also show how to perform data acquisition to obtain these artifacts in a forensically sound manner.
Today we are embracing the benefits and advantages of having cloud storage in most environments, especially now that everyone is working from home and data moves from one place to another through cloud storage services such as OneDrive, Box, Dropbox and Google Drive. There are several artifacts on the endpoint side that give us the ability to see the bigger picture when these cloud services are used for data exfiltration or other malicious actions. In short, cloud storage data can be more accessible on the local device and can contain files and metadata distinctly different from the current cloud repository. I'm going to show how to perform data acquisition on these cloud storage applications installed on an endpoint, and what metadata and evidence we can extract from a forensics standpoint.
See Something, Say Something? The State of Coordinated Vulnerability Disclosure in Canada’s Federal Government
Countries around the world like the US, the UK and the Netherlands have all adopted coordinated vulnerability disclosure (CVD) frameworks to better secure government computer systems. CVD is an approach to vulnerability disclosure that provides good faith external security researchers a procedure for disclosing security flaws. However, the topic has largely remained understudied and underutilized in the Canadian context, leaving federal government institutions potentially more vulnerable in the face of internal and external threat actors. This talk identifies best practices and the policy frameworks needed to harness the efforts of security researchers who find and disclose security flaws in Canada’s federal government software, web applications, and potentially hardware, vehicles and critical infrastructures before adversaries do.
Our research confirms that Canada is falling behind when it comes to the use of transparent and clear CVD frameworks in comparison to jurisdictions across the globe. Numerous federal laws, including criminal and copyright legislation, may also have a chilling effect on security research in Canada, with deficient whistleblowing protection laws that could otherwise protect people who disclose security vulnerabilities. Our work identifies the need for increased transparency and explicit regulation in Canada’s current approach to vulnerability disclosure at the federal level.
Yuan Stevens Ryerson Leadership Lab and Cybersecure Policy Exchange at Ryerson University; Data & Society Research Institute
Stephanie Tran Ryerson Leadership Lab
Florian Martin-Bariteau University of Ottawa
Bypassing advanced device profiling with DHCP packet manipulation
Network Access Control is a mechanism that checks the security posture of a device before it is allowed access to a network. One of the oldest inspection techniques uses MAC address inspection; however, this is a trivial defence mechanism to bypass.
More advanced device profiling deploys various techniques such as nmap scans, DNS inspection, DHCP inspection, SNMP checks, and OSI layer-two protocols such as Cisco Discovery Protocol or Link Layer Discovery Protocol to identify the connecting device’s features. The mechanism explained in this paper is the manipulation or spoofing of DHCP packets to trick advanced device profiling into thinking the attacking device is a legitimate one. Essentially, we are masquerading an attacking device with crafted DHCP packets so that the device appears to the inspection engine as a legitimate device. A proof of concept has been developed that allows an attacker to define the DHCP payload to mimic the fingerprint of an arbitrary device. To the best of the author’s knowledge, no such or similar tool is publicly available. This is also the first paper to describe in depth a client-based DHCP attack which is neither a denial of service (server starvation) nor a rogue server.
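The core of the idea can be sketched with plain byte manipulation (a simplified illustration; the option values below are a plausible example, not the paper's actual PoC payload). DHCP profilers commonly fingerprint a client by the contents and order of option 55, the parameter request list, and by fields like option 60 (vendor class), so a crafted packet can simply copy the values a legitimate device would send.

```python
# Illustrative sketch: building DHCP option bytes that mimic another
# device's fingerprint. Option values here are example choices, not
# taken from the paper's proof of concept.

def dhcp_option(code, payload):
    """Encode a single DHCP option as type-length-value bytes."""
    return bytes([code, len(payload)]) + payload

# Option 55 (parameter request list) mimicking a target device's order:
# subnet mask (1), router (3), DNS (6), hostname (12), domain name (15)
param_request = dhcp_option(55, bytes([1, 3, 6, 12, 15]))

# Option 60 (vendor class identifier) is another commonly inspected field
vendor_class = dhcp_option(60, b"MSFT 5.0")

# Options field of the crafted packet, terminated by the end option (255)
options = param_request + vendor_class + bytes([255])
```

In a full tool these options would be embedded in a complete DISCOVER/REQUEST exchange (e.g. via a packet-crafting library), so that the inspection engine classifies the attacking machine as whatever device the copied fingerprint belongs to.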
Just Add More LEDs: NSec 2018 and 2019 Badge Mods
Here's what you can do with a hardware badge once a con is over besides just hanging it up on the lanyard. Specifically, how to modify the Nsec 2018 'Sputnik' and 2019 'Brain' badges for off-board LED strips. e.g. as a monitor backlight, or just BLINKEN LIGHTS! With a bonus of how to do a hardware-port of a 503 party badge to the nsec 2018 badge.
I'll share all the parts lists with links and steps on how to do it. The LED strip mods are pretty simple and could be completed at home by those with some soldering experience, but I will show a few ways not to do it that I learned the hard way anyways. We will try to always include the "why it's possible" for those of you not familiar with HW stuff: Attendees will leave with parts lists and plans to add off-board LEDs to the 2018 and 2019 Nsec badges as well as the burning desire to make their own mods to other conference badges, whether or not they probably should. I love making my own use of HW -- usually involving a mess of wires and I hope it rubs off on you too.
Hacking K-12 school software in a time of remote learning
During the COVID-19 pandemic, students across the country started to participate in e-learning for the first time. While the students had to adapt to new environments, so did the software. Since school computers are now being taken home, educational software is exposed to a wider range of threats. If classroom management software were compromised, not only would a school district be affected, but attacks could spread to home devices. This talk will take an in-depth look at the zero-day vulnerabilities discovered by McAfee’s Advanced Threat Research Team in a K-12 classroom management solution used in over 9,000 school districts. The focus will be on how four vulnerabilities combine to allow a wormable unauthenticated remote code execution (RCE) resulting in System-level privileges. This presentation will include a technical dissection of the network protocol, leading to custom Scapy layers, and a demo showing a single-click exploit.
A deep dive of how four zero-day vulnerabilities in an educational management software can lead to a wormable unauthenticated attack allowing an attacker to gain system level privileges on every student computer on a network. This talk will cover the thought process and technical details of reverse engineering network traffic, creating custom Scapy layers, and the development of a single click exploit.
Request Smuggling 101
This presentation provides an overview of the latest research on HTTP Request Smuggling (HRS), an attack abusing inconsistencies between the interpretation of requests’ ending by HTTP request parsers. The attack occurs when, for the same stream, the proxy component sees one request while the web backend component sees two distinct requests. The most common risks will be presented, along with a set of payload variations and a live attack demonstration.
Load balancers and proxies, such as HAProxy, Varnish, Squid and Nginx, play a crucial role in website performance, and they all have different HTTP protocol parsers implemented. HTTP Request Smuggling (HRS) is an attack abusing inconsistencies between the interpretation of requests’ ending by HTTP request parsers. What might be considered the end of one request by your load balancer might not be considered as such by your web server.
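The classic CL.TE variant of this ambiguity can be sketched as follows (a textbook-style illustration, not necessarily a payload from the talk): the front end trusts Content-Length and forwards everything as one request, while a back end trusting Transfer-Encoding sees the chunked body end at `0\r\n\r\n` and treats the leftover bytes as the start of a second, smuggled request.

```python
# Classic CL.TE request smuggling payload (illustrative example).
# Front end: honors Content-Length -> sees ONE request.
# Back end:  honors Transfer-Encoding -> body ends at "0\r\n\r\n",
#            so the trailing bytes become a SECOND request.

smuggled = b"GET /admin HTTP/1.1\r\nHost: internal\r\n\r\n"
body = b"0\r\n\r\n" + smuggled

request = (
    b"POST / HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
) + body
```

Which component "wins" depends entirely on each parser's handling of the conflicting headers, which is why mismatched proxy/back-end pairs, not any single product, are what create the vulnerability.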
In this presentation, we will see how an attacker can abuse several vulnerable configurations. HTTP Request Smuggling (HRS) enables multiple attack vectors, including cache poisoning, credential hijacking, URL filtering bypass, open redirects and persistent XSS. For each of these vectors, a payload will be showcased and explained in depth. Also, a live demonstration will be made to see the vulnerability in action. Aside from exploitation, we will show how developers and system administrators can detect such faulty configurations using automated tools.
By the end of this talk, security enthusiasts at any level will have solid foundations for mitigating request smuggling, a vulnerability that has greatly evolved over the past 15 years.
CrimeOps of the KashmirBlack Botnet
We will take you down the rabbit hole into our journey to expose the KashmirBlack botnet. Explore the DevOps behind the botnet and go deep into the bits-and-bytes of the infection technique.
The KashmirBlack botnet mainly infects popular CMS platforms. It utilizes dozens of known vulnerabilities on its victims’ servers, performing millions of attacks per day on average, on thousands of victims in more than 30 different countries around the world.
Its well-designed infrastructure makes it easy to expand and add new exploits or payloads without much effort, and it uses sophisticated methods to camouflage itself, stay undetected, and protect its operation.
It has a complex operation managed by one C&C (Command and Control) server and uses more than 60 - mostly innocent surrogate - servers as part of its infrastructure. It handles hundreds of bots, each communicating with the C&C to receive new targets, perform brute force attacks, install backdoors, and expand the size of the botnet.
Takeaways: - Security is only as strong as the weakest link. - CMS platforms have the potential to be the weakest link in the security chain, because they are so modular, with thousands of plugins and themes. Owners are notorious for poor cyber hygiene, using old versions, unsupported plugins and weak passwords. It's not that CMS platforms are inherently very vulnerable; it's that they have the potential to be. - A large-scale botnet doesn't necessarily need an exotic exploit to expand; it can exploit old vulnerabilities to infect millions of victims. But in order to create a stable, long-term botnet, it needs a well-designed, agile infrastructure. - The COVID pandemic has created more opportunities for hackers, as more businesses digitize their operations. Just as the world adjusts and more businesses go online, the community needs to adjust and educate for better security hygiene.
Ofir Shaty Imperva
Sarit Yerushalmi Imperva
dRuby Security Internals
dRuby is a "distributed object system" built into Ruby that is generally known to be insecure, but which has never been properly audited... until now. In this talk, we will discuss how dRuby works, where its insecurities lie, and how it is much more insecure than previously understood — which is a feat, considering that dRuby already provides code execution as a service. This talk will focus on a discussion of the dRuby API, its internals, and its underlying wire protocol, covering the security issues inherent in each along the way. As part of this, we will also demonstrate several novel exploitation techniques that can be used against both dRuby servers and clients, the latter of which have not been known to be vulnerable until now. Following this, we will discuss some of our work to harden dRuby against each of the issues we identified. We will then close our talk by covering our work to exploit the exploits used to compromise dRuby-based services for some very ironic honeypotting.
dRuby is a "distributed object system" for Ruby (think CORBA or Java's RMI). Included in the Ruby standard library and implemented in vanilla Ruby without native extensions, it provides a simple-to-use interface to interact with Ruby objects from other Ruby processes, locally or over a network. While dRuby makes it fairly easy to expose objects and their interfaces to other processes, including those running on separate systems, it leaves a lot to be desired in terms of its security. While its own API documentation warns coyly of its insecurity with a simple-to-understand example exploit written in Ruby, the actual implementation and protocol of dRuby are not documented at all, nor are the actual risks dRuby exposes. While dRuby is well known to be a readily exploitable service enabling remote code execution, the underlying protocol exposes a number of additional risks that enable not only alternate methods of compromising dRuby services, but also the means to compromise dRuby clients.
In this talk, we will open with the background of how we found dRuby being used by a popular remote debugging dependency. We will then shift to an overview and technical discussion of dRuby and its protocol as defined by its implementation, starting with some basic examples of how to use dRuby. Following this, we will walk through an analysis of the network protocol guided by the traffic generated from our examples, and discuss how the data is processed, including a high-level discussion of the dual client-server peer-to-peer model used in dRuby. As part of this, we will also discuss the implementation of dRuby's remote method call scheme, data serialization, and proxy objects, including the default object reference scheme and ID mapper.
Throughout our discussion of dRuby's API, internals, and wire protocol, we will bring attention to and discuss relevant risks and vulnerabilities — and how they make dRuby fundamentally unsafe — and demonstrate several novel proof-of-concept exploits targeting dRuby services and clients. We will also discuss some of the existing advice and documentation for "securing" dRuby and how it fails to guard against dRuby's inherent issues.
Following this, we will briefly discuss our efforts to harden dRuby; the kinds of protocol, logic, and API changes needed to negate its issues; and additional considerations that should be taken into account so as not to expose further security issues.
Lastly, we will swing back to offense — or rather at offense — and close our talk with a discussion on the insecurity of existing dRuby exploits, and show how you can penalize your pentesters for using off-the-shelf exploits. As part of this, we will demonstrate an emulated implementation of the dRuby wire protocol that can be used to securely exploit dRuby services, clients, and exploits.
AMITT Countermeasures - A Defensive Framework to Counter Disinformation
AMITT (Adversarial Misinformation and Influence Tactics and Techniques) is an open-source framework for describing the strategic, operational, and tactical elements of influence operations. By enabling researchers, practitioners, and policy makers to communicate their findings, AMITT is bringing together an international community to help combat disinformation.
Last year we introduced our work seeding communities and training them on the practical application of AMITT, as well as the framework's integration into free, open-source threat intelligence tools.
This year, the Cognitive Security Collaborative introduces major updates to the AMITT framework which now includes a complementary set of countermeasures to be used against adversarial influence operations.
In this talk we address some of the major disinformation events of 2020 relating to COVID-19 and the 2020 US Presidential election. Additionally, we explore the practical application of AMITT countermeasures.
Critical Vulnerabilities in Network Equipment: Past, Present and Future
In this talk, we will discuss common vulnerability patterns in network equipment (consumer and enterprise routers, firewalls, VPNs, TLS accelerators, switches, WAFs, etc.). This critical infrastructure is unfortunately far more vulnerable than most people believe, although its security stance has improved over the last few years. We will go through the history of these vulnerabilities, why they occur, and what we should expect to happen in the future as exploit protections in these devices improve.
Routers are considered easy to hack, and that's kind of true. But is hacking an enterprise firewall that much harder than hacking a home router? Think twice before answering!
The purpose of this talk is to demonstrate the similarities in inner workings, technology, hardware, and vulnerability density across all network equipment, whether built for the home or the enterprise.
We will walk through specific examples of vulnerabilities found in this equipment, past and present. Vulnerability patterns will be identified, and we will discuss why they keep occurring and what circumstances led to them appearing in the first place.
Finally, we will discuss future trends for vulnerabilities in network equipment. And because it can't all be negative, we will also discuss how the constant hardening of these devices will make exploitation much harder (but far from impossible :) in the future.
Authentication challenges in SaaS integration and Cloud transformation
Enterprise companies are adopting cloud applications at an increasing pace. The work-from-home (WFH) new normal has made Cloud transformation even more demanding than ever. The Software as a Service (SaaS) access model is prevalent for WFH because it lets devices connect from both the internet and the corporate network. Even though many enterprises have adopted SaaS solutions, a workable integration does not necessarily imply a secure one. Enterprises must come up with a strategic solution to maintain security standards sustainably. Managing authentication in the Cloud is a complex problem, more complicated than in traditional, on-premise "Walled Garden" environments. Public Cloud applications reside in a more "open" and "shared" environment and therefore face different attack vectors and vulnerabilities. The conventional ways of handling authentication are not good enough to securely protect Public Cloud resources and SaaS applications from unauthorized access. In this presentation, I will go through some common SaaS integration security pitfalls and the risks of unmanaged Cloud identities, and explain why adopting an Identity Provider (IDP) solution is critical to handling Cloud authentication securely. The audience will also see how a Cloud-based IDP solution tackles Cloud authentication problems more intelligently than a traditional IDP.
This presentation is suitable for anyone interested in knowing how to tackle the Authentication challenges of Cloud transformation in a complex enterprise environment.
Now more than ever, enterprise companies are adopting cloud apps at an increasing pace. The pandemic outbreak has accelerated the digital shift. Work-from-home is the new normal, and this trend is unlikely to go away when the pandemic ends. This phenomenon has made Cloud transformation even more demanding. The Software as a Service (SaaS) access model enables devices to connect from both the internet and the corporate internal network - a prevalent access model for WFH.
We have seen enterprises rely more on business-critical SaaS applications, such as Google G Suite, Microsoft Office 365, and Salesforce. Some have even started to deploy their in-house applications on Public Cloud Service Providers' (CSP) platforms / Infrastructure as a Service (IaaS) like Amazon Web Services (AWS) and Azure. Even though many enterprises have adopted SaaS solutions, most are still early in the game or have only recently started their Cloud transformation journey. A workable integration does not necessarily imply a secure SaaS integration. To maintain security standards with sustainability and scalability, enterprises must develop a strategic roadmap by adopting industry-standard authentication protocols and moving away from homegrown authentication methodologies.
Managing authentication in the Cloud is a complex problem, more complicated than in the traditional, on-premise (on-prem) environment. The conventional on-prem ways of handling authentication are not good enough to securely protect Public Cloud and SaaS applications from unauthorized access.
(1) SaaS integration authentication pitfalls
• The conventional on-prem environment is like a "Walled Garden", where business activities are conducted within the office or network boundary, guarded by and monitored under an explicit firewall policy.
• In contrast, Public Cloud / SaaS applications reside in a more "open" and "shared" environment. They are accessible to any user with any endpoint from any location and therefore face different attack vectors and vulnerabilities. An intelligent way to strongly verify a user's identity, using contextual authentication on top of multi-factor authentication (MFA), is critical to securing Cloud and SaaS endpoints.
• One of the most common SaaS authentication design failures is when a single sign-on (SSO) solution is not adopted or enforced across the board. Each SaaS application then has its own identity store and password requirements. As a result, users must maintain multiple accounts manually, creating a gateway for attackers to gain unauthorized access to various SaaS applications.
(2) Risk of unmanaged growth in Cloud identities
• Failing to adopt an SSO solution during Cloud migration causes another pressing problem: the rapid, unmanaged growth of cloud identities across SaaS applications and CSP platforms.
• A typical example of poor identity lifecycle management is zombie SaaS accounts, where the accounts of inactive users or former employees remain active.
• Managing user account provisioning and de-provisioning across multiple SaaS applications and CSPs requires a centralized identity management solution.
(1) Adopt an Identity provider (IDP) solution
• By extending SSO to Cloud applications with a single authentication point through an IDP, users can access cloud / SaaS apps with their corporate identities without sending their credentials externally. An IDP solution dramatically improves the overall user experience and provides secure, uninterrupted service through a single credential.
• Enterprise companies with a long history often have more legacy applications. Some of these applications handle basic authentication (e.g., username/password) themselves and usually rely on homegrown authentication methodologies that do not follow the latest industry standards. Adopting an IDP solution enables the enterprise to embrace standard authentication protocols like OpenID Connect, OAuth, and SAML, and to integrate seamlessly with various SaaS and CSP offerings. This standardization also reduces vulnerabilities in the overall IT environment and helps enterprises meet compliance and regulatory requirements smoothly.
• It's essential to choose a good IDP solution that enables the security team to standardize the SSO connections to cloud applications and on-premises applications with a centralized policy framework.
(2) Tackle the Cloud authentication problems more intelligently using a Cloud-based IDP solution
• Most of the Cloud-based IDPs enable admin users to create policies that continuously assess risk and enforce policies to mitigate risks when they arise.
• Under the Zero Trust security model, the perimeter is no longer at the network level but at the identity level. A Cloud-based IDP leveraging machine learning and contextual authentication helps both users and administrators solve the "anywhere, anytime, from any device" access challenge more intelligently. Cloud-based IDPs like Azure Active Directory provide services that automate the detection and remediation of identity-based risks.
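As a concrete illustration of why the standard protocols above matter: an OpenID Connect ID token is a signed JWT that any relying party can verify against the IDP's published public key, so no homegrown credential ever leaves the enterprise. A minimal sketch using only Ruby's standard library follows; the issuer, subject, and audience values are illustrative, and in practice the public key would be fetched from the IDP's JWKS endpoint rather than generated locally.

```ruby
require 'openssl'
require 'base64'
require 'json'

# Base64url without padding, as JWTs use (RFC 7515).
def b64url(data)
  Base64.urlsafe_encode64(data).delete('=')
end

def b64url_decode(str)
  Base64.urlsafe_decode64(str + '=' * (-str.length % 4))
end

# Stand-in for the IDP's RS256 signing key (normally held by the provider).
idp_key = OpenSSL::PKey::RSA.new(2048)

# The IDP issues an ID token: header.payload.signature.
header  = b64url({ alg: 'RS256', typ: 'JWT' }.to_json)
payload = b64url({ iss: 'https://idp.example.com', sub: 'alice',
                   aud: 'my-saas-app' }.to_json)
signature = b64url(idp_key.sign(OpenSSL::Digest::SHA256.new,
                                "#{header}.#{payload}"))
id_token = "#{header}.#{payload}.#{signature}"

# The relying party (the SaaS app) verifies the signature with the IDP's
# public key before trusting any claim in the token.
h, p, s = id_token.split('.')
valid = idp_key.public_key.verify(OpenSSL::Digest::SHA256.new,
                                  b64url_decode(s), "#{h}.#{p}")
claims = JSON.parse(b64url_decode(p))
```

A real relying party would additionally check the `iss`, `aud`, and expiry claims; the point of the sketch is that trust rests on a published key and a standard format, not on a shared password.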
Social bots: Malicious use of social media
This research focuses on the malicious use of social media, specifically Twitter, during the 2019 Canadian Federal Election Campaign. Social bots have often been used in the past to manipulate public discourse through disinformation campaigns aimed at political interference. A mixed methodological approach combining descriptive analyses with interviews is used to draw a portrait of social bots' role during this electoral campaign. A digital analysis tool called Botometer is used to find social bots within a database initially collected in 2019 by Commissionaires du Québec. This tool makes it possible to identify social bots and rate them with a score from 0 (not a social bot) to 5 (most likely a social bot); these scores are then analyzed to determine how the bots inserted themselves into the political discussion during the period under study. Interviews conducted with experts in the field aim to deepen and give meaning to the results obtained previously. The results show that several social bots did not publish content in English, that the tweets analyzed are mainly retweets, that thousands of users have been suspended, and that the hashtags used promoted the election of Liberal Justin Trudeau to the detriment of Conservative Andrew Scheer.
This talk will be about a research project that focuses on the malicious use of social media, specifically Twitter, during the 2019 Canadian Federal Election Campaign. Social bots have often been used in the past to manipulate public discourse through disinformation campaigns aimed at political interference. A mixed methodological approach combining descriptive analyses (quantitative) with interviews (qualitative) is used to draw a portrait of social bots' role during this electoral campaign. A digital analysis tool called Botometer is used to find social bots within a database initially collected in 2019 by Commissionaires du Québec. This tool makes it possible to identify social bots and rate them with a score from 0 (not a social bot) to 5 (most likely a social bot); these scores are then analyzed to determine how the bots inserted themselves into the political discussion during the period under study. Interviews conducted with experts in the field aim to deepen and give meaning to the results obtained previously. The results of the study show that several social bots did not publish content in English (52% of those rated 5), that the tweets analyzed are mainly retweets (87% of the sample), that thousands of users have been suspended since last year, and that the hashtags used promoted the election of Liberal Justin Trudeau to the detriment of Conservative Andrew Scheer. Additionally, the overall content is divided between positive and negative sentiment, with a slight prevalence of positive content (51.01% vs. 48.99%). This talk's primary goal is to give the audience a better understanding of research on this very new and critical geopolitical issue, which happens to manifest in cyberspace. It also aims to share an established methodological approach, tested with a Canadian case study, with anyone interested.
Cryptography Do's and Don'ts in 2021
Do you feel unequipped to understand real-world crypto attacks? Are you overwhelmed by the overabundance of choices any modern cryptography API offers when you have to make a secure decision about a randomness provider, an encryption scheme, or a digital signature API? Are you on top of all the latest happenings in cryptographic communities, so you know which cryptographic primitives are deemed broken? Given the sheer lack of documentation for your chosen API, do you feel paralyzed about where and how to start designing or analyzing a cryptographic system?
If the answer to any of these is "yes", come join me in this talk. I will go over each class of cryptographic primitive: random number generators, encryption/decryption algorithms, message authentication codes, digital signatures, password storage, and more. We will discuss common insecure crypto patterns observed in real-world applications, best practices, and what to be wary of, all based on evaluating a number of leading cryptographic implementations while not losing sight of future-proofing applications. This should help security architects and developers design their crypto applications, and security practitioners audit these systems.
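To make a few of the "do's" concrete, here is a compact sketch using only Ruby's standard OpenSSL bindings: a CSPRNG for key material, an authenticated cipher mode instead of bare CBC/ECB, and a memory-hard KDF for password storage. The plaintext, labels, and scrypt parameters are illustrative choices, not recommendations from the talk.

```ruby
require 'openssl'
require 'securerandom'

# Do: draw keys, nonces, and tokens from a CSPRNG, never from rand().
key = SecureRandom.random_bytes(32)

# Do: prefer an authenticated (AEAD) mode such as AES-256-GCM; a fresh
# random nonce per message is mandatory, and associated data is
# integrity-protected without being encrypted.
cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
cipher.key = key
nonce = cipher.random_iv
cipher.auth_data = 'header-v1'
ciphertext = cipher.update('attack at dawn') + cipher.final
tag = cipher.auth_tag

# Decryption fails loudly if the ciphertext, tag, or associated data
# was tampered with.
decipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
decipher.key = key
decipher.iv = nonce
decipher.auth_tag = tag
decipher.auth_data = 'header-v1'
plaintext = decipher.update(ciphertext) + decipher.final

# Do: store passwords with a salted, memory-hard KDF such as scrypt,
# never a bare hash.
salt = SecureRandom.random_bytes(16)
stored = OpenSSL::KDF.scrypt('correct horse', salt: salt,
                             N: 2**14, r: 8, p: 1, length: 32)
```

The "don'ts" are the mirror image: `rand()` for keys, ECB or unauthenticated CBC, a reused nonce, or an unsalted fast hash for passwords.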
How to harden your Electron app
Let’s be honest — when you decided to build an Electron app, it wasn’t because of the framework’s stellar reputation for security. Like so many developers before you, you weighed your options and made a practical choice. But now you have to make the best of it and protect your users and their data. Hardening your Electron app is not straightforward, but it is also not impossible. Through a combination of threat modelling, careful separation of concerns, and simply reading the docs, you can achieve the security goals for your app. This talk is about how we built a secure password manager in a framework that’s infamous for being insecure. We’ll look at the security model of our Electron-based frontend for 1Password, what pitfalls we encountered along the way, and how you can apply what we’ve learned to your own projects. We’ll also reveal our hardened Electron starter kit and invite you to see how it works — and try to break it.
Electron and web apps may never be the first choice for security-conscious developers, but they are an industry reality. We recently faced this dilemma at 1Password when we set out to build the new Linux desktop client for our flagship password manager.
Compromising on security was not an option. At the same time, building a web app was the only practical option. Undeterred, we set out to harden Electron to meet our unique client-side requirements.
I am not going to pretend we made it all the way — no software framework ever will. But we did end up with an app we are proud to call 1Password, and to entrust with our users’ most sensitive data.
I hope to share what we learned so that others in a similar situation will have an easier time. At the same time, I invite the community to see what we’ve built and look at what we’ve gotten right — or wrong.