Teleport sponsored this post.
The VerticalChange web-based cloud platform is committed to serving the community. From the start, VerticalChange was designed to empower the health and social services sector — through cutting-edge data management and analysis — and deliver services to individuals, families, and communities while measuring outcomes.
By hosting on AWS, VerticalChange gets a flexible, cost-effective, and highly scalable cloud solution. As an Advanced Public Sector Partner, the platform has passed some of AWS's strictest implementation requirements.
VerticalChange is HIPAA-compliant and adheres to a strict data encryption policy. Through its advanced permissions settings, clients can control access to information down to a minute level. The platform uses open source tools such as Kubernetes, Teleport, and PostgreSQL to achieve security and availability. VerticalChange Chief Technology Officer Dylan Stamat — in this interview with Ben Arent — explains how they got there.
Can you tell us about VerticalChange?
VerticalChange is a data system for the social sector. Think of it as a Qualtrics-meets-Salesforce CRM with a dynamic form builder and analytics platform. Our customer base consists of organizations such as mental health agencies, homeless shelters, early childhood education programs, a lot of counties and the like. It’s a lot of data that has to comply with HIPAA, FERPA, and other strict security rules.
Can you walk me through how customers use VerticalChange?
I have family involved with some of these agencies, and I knew it was very paper-based. You’d have file folders full of people’s records, and often there are only a few people working at these agencies. They typically have to provide quarterly reports to get more funding. Often that process involves just sitting there, painstakingly opening file folders one page at a time and jotting down tick marks in an Excel sheet. It’s really laborious, and we wanted to work with them to digitize the process.
You have been through the AWS Public Sector Partner Program. Can you describe the technology choices you made and how Amazon helped you get there?
Back in the day, Amazon wasn’t signing any business associate agreements (BAAs) for HIPAA. We were actually one of the first customers to get a BAA signed with Amazon. It was a lot of work. For example, we had to run PostgreSQL on our own EC2 instances because there wasn’t a PostgreSQL RDS implementation yet. AWS required the use of dedicated instances as well. As a startup, it was expensive, but that was the only way you could do it. You weren’t able to use anything virtualized. It was a lot of handholding in the beginning — a V1 infrastructure — to get that BAA signed. Now things are a lot different.
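As a point of reference, dedicated tenancy is something you still request at launch time. The boto3 sketch below shows roughly what that kind of early setup involved: a dedicated-tenancy instance with an encrypted root volume. The AMI, subnet, and instance type are placeholders rather than VerticalChange’s actual configuration.

```python
import boto3

# Rough sketch: launch a dedicated-tenancy EC2 instance with an encrypted
# root volume, the kind of isolation early HIPAA BAAs required.
# The AMI ID, subnet, and instance type below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="m5.large",              # placeholder size
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    Placement={"Tenancy": "dedicated"},   # dedicated hardware, not shared tenancy
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 100, "Encrypted": True},  # encrypt data at rest
    }],
)
print("launched:", response["Instances"][0]["InstanceId"])
```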
Now there are a lot more services that help get you there. Did that change how you picked open source technologies at the beginning, going from bare metal to achieve compliance?
It did. We slowly moved some of the services over to the latest and greatest stuff, depending on whether it made sense, and then we did a full rewrite of our infrastructure. Now we run on Kubernetes and use a lot of open source tooling. That actually covers a lot of our compliance controls because HIPAA is composed of a bunch of security and privacy rules. We are also looking at some higher-level certifications such as NIST 800-53 and moving to GovCloud, so we want those boxes to be checked and to be using services that are already compatible.
How did you come across Teleport, and what problem did it solve for you?
Prior to moving to the Kubernetes workflow, we were using Chef, which placed SSH keys on hosts. Now we’re using Terraform and a lot of other open source tools. Just managing machine access was a huge issue. We’d have people come on board and eventually leave. Managing their SSH keys wasn’t always very easy with the Chef process because you’d end up with abandoned keys left on certain machines within the cluster.
They would be abandoned because you have an employee leave?
Some of the decommissioning processes would fail, and we had keys on machines that shouldn’t have been there. Moving over to a system where we’re using SSO and Teleport helps a lot because we’re able to set up rules for our employees and provision access to certain parts of our infrastructure based on employee role, with that access issued as short-lived certificates.
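To make the abandoned-keys problem concrete, here is a minimal Python sketch of the cleanup audit a static-key setup tends to force on you: scanning authorized_keys files for keys that don’t map to a current employee. The roster and the assumption that a key’s comment identifies its owner are illustrative; short-lived certificates remove the need for this chore.

```python
from pathlib import Path

# Minimal sketch: flag SSH keys in authorized_keys files whose comment does
# not match a current employee. Assumes key comments identify owners and
# ignores option-prefixed entries for simplicity.
CURRENT_EMPLOYEES = {"alice@example.com", "bob@example.com"}  # placeholder roster

def stale_keys(authorized_keys: Path):
    """Yield (line_number, comment) for keys not tied to a current employee."""
    for lineno, line in enumerate(authorized_keys.read_text().splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        comment = parts[2] if len(parts) > 2 else "<no comment>"
        if comment not in CURRENT_EMPLOYEES:
            yield lineno, comment

for home in Path("/home").iterdir():
    keys_file = home / ".ssh" / "authorized_keys"
    if keys_file.is_file():
        for lineno, comment in stale_keys(keys_file):
            print(f"{keys_file}:{lineno} stale key ({comment})")
```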
Do you use Teleport to provide Kubernetes access?
Yes. A big thing for us was auditing kubectl access. As you move into the higher compliance levels, knowing who has access to machines is something you need. It’s nice to automate that process and have these logs so we can reference them if we need to.
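Teleport records this activity as structured audit events. As a rough illustration, the sketch below tallies Kubernetes session starts per user from a JSON-lines audit log; the log location and the field names used here are assumptions that can vary by Teleport version and configuration.

```python
import json
from collections import Counter
from pathlib import Path

# Sketch: count Kubernetes session starts per user from Teleport's JSON-lines
# audit log. The directory and the "event", "user", and "kubernetes_cluster"
# field names are assumptions and may differ by Teleport version.
LOG_DIR = Path("/var/lib/teleport/log")  # assumed audit log location

sessions = Counter()
for log_file in LOG_DIR.glob("*.log"):
    for line in log_file.read_text().splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines
        if event.get("event") == "session.start" and "kubernetes_cluster" in event:
            sessions[(event.get("user"), event["kubernetes_cluster"])] += 1

for (user, cluster), count in sessions.most_common():
    print(f"{user}: {count} session(s) on {cluster}")
```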
What was your process for adopting Kubernetes? Is there a straightforward way to migrate to Kubernetes — and if you’ve already got a product deployed, a seamless way to move it over?
The first step was just containerizing our application, getting our team on board with putting everything in containers and getting comfortable with Docker. That was a bit of a learning curve. We did crash courses on how Kubernetes works — people are hesitant to jump on the Kubernetes bandwagon because it’s a large system and they don’t know how all the pieces work together. Once you containerize your existing applications, throwing them into a Kubernetes installation is pretty straightforward.
We created a brand new AWS account, set up account sharing for some services, and started from scratch with brand new IAM roles — brand new everything. All created via Terraform with access provided by Teleport. So we started from a nice, fresh, clean installation and worked from there.
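To make that first containerization step concrete, here is a small sketch using the Docker SDK for Python to build an image from an existing Dockerfile and run it locally. The image tag and port mapping are placeholders, not VerticalChange’s actual setup.

```python
import docker  # pip install docker

# Sketch of the basic containerization workflow: build an image from an
# existing Dockerfile in the current directory, then run it locally.
# The image tag and port mapping are placeholders.
client = docker.from_env()

image, build_logs = client.images.build(path=".", tag="myapp:dev")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")  # surface the build output

container = client.containers.run(
    "myapp:dev",
    ports={"8000/tcp": 8000},  # expose container port 8000 on the host
    detach=True,
)
print("running container:", container.short_id)
```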
For Kubernetes, how do you deal with upgrades?
Rancher makes upgrades super easy. It’s a Kubernetes implementation and administration console. Rancher, with Kubernetes, almost gives you a Platform-as-a-Service-type setup, so it’s really easy to run different services, set up the network for each of them, and have a single pane of glass showing how our entire infrastructure is running and what services are running.
Our deployment is fully scripted, so we’re able to easily create new environments of our whole platform. We have a build process and a staging implementation. We can spin up another whole environment with just one command. From there, we’re able to test any of these integrations before they happen. We also use Flagger, so we have canary deployments set up as well. When we deploy something, if there’s an uptick in errors coming from the application or anywhere within our infrastructure on the Kubernetes side, it rolls back to the previous version.
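Flagger performs that analysis automatically against whatever metrics provider it is configured with. Purely as an illustration of the decision it automates, the sketch below queries Prometheus for a canary’s error rate and flags a rollback when it crosses a threshold; the Prometheus URL, metric names, and threshold are assumptions, not Flagger’s internals.

```python
import requests

# Simplified illustration of the check a canary controller automates:
# query Prometheus for the canary's 5xx error rate and decide whether to
# continue the rollout or roll back. URL, query, and threshold are assumed.
PROMETHEUS_URL = "http://prometheus.monitoring:9090/api/v1/query"
ERROR_RATE_QUERY = (
    'sum(rate(http_requests_total{deployment="app-canary",code=~"5.."}[1m])) / '
    'sum(rate(http_requests_total{deployment="app-canary"}[1m]))'
)
MAX_ERROR_RATE = 0.01  # roll back if more than 1% of requests fail

resp = requests.get(PROMETHEUS_URL, params={"query": ERROR_RATE_QUERY}, timeout=10)
resp.raise_for_status()
results = resp.json()["data"]["result"]
error_rate = float(results[0]["value"][1]) if results else 0.0

if error_rate > MAX_ERROR_RATE:
    print(f"error rate {error_rate:.2%} exceeds threshold: roll back the canary")
else:
    print(f"error rate {error_rate:.2%} within threshold: continue the rollout")
```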
Teleport is a proxy that provides SSH access to clusters of EC2 hosts. That was how your journey started, back in the Chef days, but now you’re also using it to provide Kubernetes access to the cluster.
Exactly. Prior to using Teleport, our implementation was a bastion machine that everybody would connect to and then hop onto different machines from there. Now, it’s a lot simpler. Teleport is nice because it also allows web access to internal applications. I know you guys are doing a database implementation of Teleport, which I’m really looking forward to.
How will Teleport Database Access, which allows organizations to use Teleport as a proxy to provide secure access to their databases, impact VerticalChange?
This is huge for us because it allows us to use our existing Teleport implementation and allow role-based access to our database instances. The most sensitive data within our network is in this database and we’re very picky about who has access to that data. Being able to audit access and track access logs is super critical. Right now it’s a pain to do.
Have you gone through a SOC 2 process?
We’re actually in the middle of that right now. If you drink coffee, make a lot of it, because it’s a lot of documentation. When a new employee is onboarded, there might be four controls you need to document, along with how you address them. It’s not actually that bad — SOC 2 Type 1 is pretty straightforward. Once you start looking at NIST 800-53 and 800-171, you get a lot more detail, so one control might have four subcontrols. With NIST 800-171, it might have 20 subcontrols, and you have to document everything. Having Teleport gave us immediate access to audit logs and pre-documented answers to a lot of security controls out of the box.
Teleport helps companies obtain FedRAMP certification, which has lots of parallels to HIPAA as far as meeting NIST controls goes.
This is actually super helpful because these NIST controls — going through the federal process — it’s just really complicated. You’ll often need to bring in a third party to help with implementation, then you need a separate third party to do the actual review. It’s a long process. It’s hard sometimes to choose the right tool. We’re pretty picky about what we choose in terms of open source tooling.
What do you normally look for when selecting an open source project?
We look for quite a few things. We look to see who the product is maintained by, what the activity level looks like, the pulse of the project. Also, if it’s run by a private company, does it offer support contracts so you have prioritized support if you need to engage with them? And is it something that’s managed by a separate body, such as Kubernetes being within the CNCF? That’s a big thing for us. The CNCF is nice because it gives open source projects governance. They’ll have deployment schedules. They’ll also have a certain level of testing criteria that needs to be met. The CNCF does a really good job of figuring out how open source projects fit into the ecosystem alongside other projects.
Why would you go for an open source product as opposed to proprietary?
For us, it was a matter of preference — we like to support open source. We always have — all the engineers on our team. We also like being able to dig into open source software when we need to and contribute back to projects. So yeah, just having a clear picture of what you’re using is a big point.
Where would you say VerticalChange has its most sensitive PII data currently?
Most of it is stored within our PostgreSQL instances, and it’s all data that falls under HIPAA. So a lot of mental health records and other very sensitive information. Having a system that is highly available is important because, as a person working — let’s say in mental health — if somebody who’s suicidal calls in or comes to the office, you need to be able to look up their records immediately and respond very quickly. With paper, running over to a file cabinet and looking things up is time-consuming, so having immediate access to clients’ history is important.
The growing remote-working trend is spurring digital transformation for these organizations that need to access records anywhere, but also give people critical care at a critical time.
Absolutely. We immediately added video capabilities to the application because a lot of our agencies were hit really hard by the pandemic.
Any tips for readers trying to rein in these highly regulated environments and looking to adopt open source technology?
Make sure the project you’re using is something that’s been around for a while and has activity, with paid support contracts available if you need that kind of consistency. There’s a lot of stuff that comes and goes, so it’s important to be able to navigate it well. The more experienced people on your team have seen projects come and go, so they know what to look for. It’s also important to look at anything you use from a security standpoint, so you can determine whether it will let you check off the controls you’ll need in the future.
There’s the AWS shared responsibility model, but ultimately you’re responsible for configuring your infrastructure. This is where Teleport, Kubernetes, and other tooling can help secure that top layer. Can you talk about some other tools that help you secure this area?
A lot of AWS tools will only get you so far. There are so many options now, and these tools change often. You have to pick carefully and ensure that you can support and properly manage your tools before they become part of your process. Otherwise, tooling itself becomes too much of a process and you’ll drown in tooling.
That said, we do use a lot of AWS tools. We use CloudTrail for logging. For non-repudiation, we ship logs to a separate AWS account that isn’t accessible to us. We use a lot of their security suite, including GuardDuty and other security tooling they make available.
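Those findings can also be pulled programmatically for review. The boto3 sketch below lists high-severity GuardDuty findings; the region and severity cutoff are illustrative choices.

```python
import boto3

# Sketch: list high-severity GuardDuty findings so they can be reviewed
# alongside CloudTrail logs. Region and severity cutoff are illustrative.
guardduty = boto3.client("guardduty", region_name="us-east-1")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]
    if not finding_ids:
        continue
    # GetFindings accepts up to 50 finding IDs per call
    details = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids[:50])
    for finding in details["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])
```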
HIPAA isn’t as prescriptive as FedRAMP; it can be a lot more vague. How does VerticalChange meet HIPAA requirements?
It’s exactly that: very vague. There isn’t a real governing body from the Department of Health and Human Services that will certify you as HIPAA-compliant. When there’s a privacy rule or a security rule, I’ll often look at NIST to see what the guidelines are there, because that’s something we’re eventually going to do anyway. I just look up the strictness ladder to see what I’ll need to do next in terms of meeting some of the FedRAMP requirements, and then implement that in the HIPAA context.
Another point about any tool that you purchase: compliance is something you have to continuously maintain even after you achieve it. Compliance isn’t a one-time checkbox.
Definitely. After you go through SOC 2 Type 1, you have to go through SOC 2 Type 2. That’s a continuation of Type 1, and you will continually have to revisit it. A lot of these rules and controls are ongoing.
Featured image via Pixabay.
InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.