scarface74 6 years ago

Why would you run your own Postgres instance on EC2 within AWS? That kind of defeats the purpose of paying for AWS. Why not use Postgres RDS or Aurora?

It makes some sense with SQL Server and Oracle in a few cases because of licensing, but hosting your own Postgres instance on AWS is the worst of both worlds -- you're paying more than you would with a cheaper VPS, you have to do all of the maintenance yourself, and you're not taking advantage of the things AWS provides -- point-in-time restores, easy cross-region read replicas, faster disk I/O (Aurora), etc.

  • malisper 6 years ago

    Hi. I'm one of the database engineers at Heap. This is a good question. There are several reasons why we use EC2. First of all, I will say I love RDS as a product. We actually do use RDS for a number of our services. We use Postgres on EC2 only for our primary data store. As for reasons why we use EC2:

    Cost - Our primary data store has >1 petabyte of raw data stored across dozens of Postgres instances. The amount of data we store is at the point where RDS is too expensive for us. An instance on RDS costs more than twice as much as the same instance on EC2. For example, an on-demand r4.8xl EC2 instance costs $2.13 an hour, while an RDS r4.8xl costs $4.80 an hour.

    Performance - The only kind of disk available on RDS is EBS. EBS is slow compared to the NVMe the i3s provide. We used to use r3s with EBS and got a major speedup when we switched to i3s. As a side note, the cost of an i3 is also less than the cost of an r3 with an equivalent amount of EBS.

    Configuration - By using EC2 we can configure our machines in ways we wouldn't be able to if we used RDS. For example, we run ZFS on our EC2 instances, which compresses our data by 2x. By compressing our data, we get a major cost savings and a major performance boost at the same time! There isn't an easy way to compress your data if you use RDS.
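
    For reference, a minimal sketch of what a ZFS-with-compression layout for a Postgres data directory can look like (the device name, pool layout, and recordsize here are illustrative, not our exact production config):

      # pool on a local NVMe device, with lz4 compression enabled
      zpool create -o ashift=12 pgpool /dev/nvme0n1
      zfs create -o compression=lz4 -o recordsize=8k -o atime=off pgpool/pgdata

      # point Postgres at the compressed dataset, check the ratio later
      initdb -D /pgpool/pgdata
      zfs get compressratio pgpool/pgdata

    recordsize=8k matches Postgres' 8 kB pages; whether that or a larger recordsize compresses better depends on the workload.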

    Introspection - There are times where we've needed to debug performance problems with Postgres and EXPLAIN ANALYZE won't suffice. A good example is we used flame graphs to see what Postgres was using CPU for. We made a small change that resulted in a 10x improvement to ingestion throughput. If you are curious, I wrote a blog post on this investigation: https://heapanalytics.com/blog/engineering/basic-performance...

    • _msw_ 6 years ago

      Hi, I'm one of the engineers at Amazon working on EC2.

      You can also get bare metal i3 instances by launching the "i3.metal" instance type. You don't need to wait for the Nitro hypervisor; you can go with no hypervisor at all.

      • malisper 6 years ago

        When i3 instances first launched, i3.metal wasn't available. We've been wanting to run experiments with i3.metal, but we've been unable to get confirmation that our reservations will transfer over. Until we know that we'll be able to transfer the reservations, there isn't much reason for us to run those experiments.

        Since you work at Amazon, do you have a sense of how big a difference there is in performance between i3 and i3.metal for database workloads like Postgres?

      • kalmar 6 years ago

        And you get 4 extra cores and 24 gigs of ram "for free" vs i3.16xl (which is what we use). I think we looked into switching but it wasn't clear if the reservations could be switched over.

    • mattbillenstein 6 years ago

      Curious re ZFS - any stability issues? Are you leveraging snapshots for backups? Special configs or vanilla ZFS-on-Linux?

      • malisper 6 years ago

        We are running vanilla ZFS-on-Linux. We don't use snapshots for backups, since Postgres-level backups are more convenient. Postgres provides point-in-time recovery, which is useful. There are also tools like wal-e that automatically write Postgres backups to S3 and restore from them.
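
        For anyone unfamiliar with wal-e, the basic shape of a setup is to point archive_command at wal-push and take periodic base backups (the credentials live in the envdir, and the data directory path here is just a placeholder):

          # postgresql.conf: ship WAL segments to S3 as they're produced
          archive_mode = on
          archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

          # periodic base backup (e.g. from cron), and a restore when needed
          envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/data
          envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/data LATEST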

        As for stability, there have been two major sources of instability with ZFS:

        The first issue was with the default value of arc_shrink_shift. By default, ZFS evicts ~1% of the ARC, the in-memory file cache, in a single pass. Our machines have several hundred gigs of ARC, so ZFS was evicting several gigs of data at a time, which caused our machines to frequently become unresponsive for several seconds.
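
        (For anyone hitting the same thing: on ZFS-on-Linux this is the zfs_arc_shrink_shift module parameter, and raising it makes each eviction pass smaller. The value below is purely illustrative.)

          # persist the setting across reboots
          echo 'options zfs zfs_arc_shrink_shift=11' > /etc/modprobe.d/zfs.conf

          # or adjust the running module, if the parameter is writable on your build
          echo 11 > /sys/module/zfs/parameters/zfs_arc_shrink_shift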

        The other issue is that, for some reason, ZFS will lock up for long periods of time if we delete several hundred gigs of data. We haven't been able to identify a root cause. So far we've worked around the problem by adding a sleep between data deletions.

        Other than these problems, ZFS has worked pretty well for us.

        • mattbillenstein 6 years ago

          Cool, thanks for the info -- glad to see people pushing these tools much harder than I plan to in the near future ;)

      • gigatexal 6 years ago

        I too want to hear about this in production as I’m thinking of moving ours to it as well.

    • scarface74 6 years ago

      Thanks for the explanation.

    • kakwa_ 6 years ago

      I was also looking at i3 instances. But the fact that storage is not persistent kind of puts me off.

      How do you manage this?

      Also, how frequently do i3 instances fail?

      • malisper 6 years ago

        We store two copies of every piece of data on two different machines. When a single machine goes down, we have code for spinning up a new machine and restoring the data that was on the machine.

        Over the course of a month, we usually have about one machine fail.

  • aidos 6 years ago

    In my experience you get much better performance outside of RDS and you can inspect and tune it better. Maybe I’m missing something and no doubt I could put more work into it but we’ve actually talked about moving our RDS dbs back to EC2 because there are plenty of queries we do that are embarrassingly slow on RDS when they shouldn’t be.

    Also, you can’t replicate out of RDS. I like to know where my data is and how to bring it back online during a disaster.

    • chc 6 years ago

      I've worked on a project that migrated from MySQL on EC2 to MySQL on RDS and then back to EC2 because the performance was massively worse — a process that took a few hours before now took days. We contacted Amazon support to try and resolve whatever was going wrong with the RDS instance, and their response was basically "Yeah, we don't guarantee performance on RDS. If you want to maximize performance, you should run your own DB on EC2."

      • nodesocket 6 years ago

        It makes sense because of the dependency on EBS, but what instance type were you using on RDS? Were you using provisioned IOPS storage?

        • chc 6 years ago

          We tried a few very large instance types to see if that made any difference, and provisioned IOPS did help, but everything was still slower than the EC2 DB and it cost a lot more on top of that.

    • cheeze 6 years ago

      RDS is better than running your own but in my experience, RDS was always kind of a meh product. One of my friends was telling me that they don't dogfood RDS internally since relational databases aren't allowed at Amazon.

      • Twirrim 6 years ago

        That's true to a fair degree. If you're running your AWS service using a relational database in AWS, they'll have your guts for garters. However, that's because of the sheer level of scaling that services have to operate at. Consider the number of requests per second that they handle. Scaling gets fiendishly complicated under those circumstances.

        That's not to say that it's impossible to do. Facebook ably achieves it. It's just that the level of expertise that would be required across so many services is significant. AWS has hundreds of services, each of which would need to hire highly skilled DBAs to handle the sharding etc. necessary to scale. It's easier to just point people at DynamoDB, where they've effectively handled all those needs for you; you just have to put a bit more logic on your application side, which also has the neat property of scaling horizontally more easily.

    • chucky_z 6 years ago

      You can replicate out of RDS without any problems. Shoot, you can run a master in EC2 and a replica-only in RDS.

      I do this presently because I have some custom stuff I do for MySQL which needs its own EC2 instance because RDS doesn't support it.

      • aidos 6 years ago

        My knowledge may be out of date now - is that with Postgres? Hopefully so.

        My gripes with the performance issues still stand though. I have queries that take 30-40 seconds on RDS that complete in milliseconds on an EC2 instance which is much smaller.

        • chucky_z 6 years ago

          That's super bizarre. Are you running MySQL with no ACID on EC2 while leaving RDS at its defaults or something?

    • sokoloff 6 years ago

      I seem to recall there was a time in RDS product lifecycle where replication out was not available. It seemed to me (at the time) a mechanism to ensure lock-in. I now think it was just natural product evolution, adding features as they were able to do so, which varied by DB engine.

    • mrep 6 years ago

      What do you mean by replicate? My old teammate was able to set up a slave from an RDS master to a non-RDS database.

    • scarface74 6 years ago

      Postgres RDS or Postgres Aurora?

      What’s more likely: that your one data center has a disaster, or that AWS's globally redundant infrastructure does?

  • wahern 6 years ago

    Sometimes you're forced to use AWS for a very specific reason, technical or non-technical, subject to change in the future. (For example, literally being given a large 7 figure "credit" as occurred at a previous job.) Letting the application become completely and irreversibly dependent on [expensive] AWS services may not be desirable.

    But who are we kidding... it's impossible to resist, and it's why AWS rakes in cash.

    That said, not being able to use vDSOs for time-querying APIs isn't just a problem for databases; it's potentially a significant problem for asynchronous software, like Node.js, that does userspace bookkeeping for event scheduling. Typically every iteration of the event loop performs at least one time query, but depending on how the software is coded, there can be one or more time queries per event processed per event loop iteration.
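
    A quick way to check whether you're on the slow path for a given instance is to look at the kernel clocksource; on many Xen-based instances the default "xen" clocksource forces clock_gettime() into a real syscall instead of the vDSO fast path (the usual caveats about TSC reliability on your particular platform apply before switching):

      cat /sys/devices/system/clocksource/clocksource0/available_clocksource
      cat /sys/devices/system/clocksource/clocksource0/current_clocksource

      # switch to tsc if it's listed as available
      echo tsc | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource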

    • scarface74 6 years ago

      How are you tied to AWS by using AWS RDS Postgres? You move your data and your code doesn’t change besides connection strings.

      • MBCook 6 years ago

        There is no magic in PG itself, it’s just normal Postgres. You can dump and reload your data somewhere else.

        But they provide management, point in time recovery, and other nice things. All done for you automatically. If you move you have to take that on yourself (if you don’t move to another provider that does it too).

        • scarface74 6 years ago

          Yes I understand that. But you still aren’t “locked in” you just have to manage more of the stuff yourself if you move from AWS. That’s kind of the point of using AWS.

        • pkaye 6 years ago

          So the lock-in is due to not having a properly skilled DBA?

          • MBCook 6 years ago

            It’s a pure convenience thing.

            • ohyeshedid 6 years ago

              I wouldn't call that lock-in.

    • cyberpunk0 6 years ago

      It's very possible to resist

      • wahern 6 years ago

        It's one thing for an engineer to resist (or resist in principle). It's another thing entirely for an organization to resist. One team taking the expedient route can wed the organization to AWS forever.

        It's how the mainframe and Windows ecosystems worked. Kudos to AWS for figuring out how to capture the exploding market for Linux- and OSS-dependent stacks.

      • Crosseye_Jack 6 years ago

        Yes and no. I'd say it depends on what else you're running on AWS. As long as you don't code yourself into a corner where switching providers would be a right ball ache, moving away from AWS shouldn't be too much of a problem.

        Remember (not you, but the people who made your comment dead): AWS, as well as every other provider, wants to lock you in.

        I say this as someone who uses AWS, but there are things that still tick me off with the platform. As an example, they don't make it easy to set up reverse DNS for a Lightsail instance versus an EC2 instance. So if you want reverse DNS to help with that outgoing email server you're setting up, it's easier to pay for a micro EC2 instance than to use Lightsail for the same purpose (even though Lightsail comes with included bandwidth, and since it's an outgoing email server, CPU isn't a major issue), or to use SNS. But it's more beneficial to AWS for you to use EC2 or SNS, even if it's not more beneficial to you, because of your end-of-month bill.

        Anyway, my point is that it's possible to resist AWS lock-in with a bit of forward thinking, as long as you code for the possibility that you might want to swap providers.

  • arciini 6 years ago

    Given that they're an analytics company, they probably have 2 problems with using Aurora/RDS.

    Note that these are educated guesses from their statement: "Heap’s data is stored in a Postgres cluster running on i3 instances in EC2. These are machines with large amounts of NVMe storage—they’re rated at up to 3 million IOPS, perfect for high transaction volume database use cases"

    1. Aurora charges per-request ($0.20/million). Given that analytics comes with tons of events and that they wanted servers that have up to 3 million IOPS, it can get pricey fast.

    2. RDS has database instances that have SSDs that provide "up to 40,000 IOPS" per instance in their provisioned case, which is probably not enough.

  • kalmar 6 years ago

    Hi post author here! First off, we actually do use RDS for other databases. As you point out, having a lot of the operational stuff taken care of for you is great.

    The post is specifically about our Citus cluster, which stores the analytics data for all our customers. Most of the reasons we do this have been given by other folks in the replies:

      * RDS doesn't support the Citus extension
      * data is stored on ZFS for filesystem compression
      * we get significantly higher disk performance from these
        instances' NVMe attached storage, which isn't available for
        RDS

  • ricw 6 years ago

    A couple of reasons, in this case for a Postgres instance that has 2TB of data.

    1) Price: you’ll easily spend $$$$$$ on RDS. If you host it on something equivalent with native SSDs, you’re looking at $800 a month with better performance.

    2) Performance: it’s way faster, and you can tune your indices, create views, and make it fast and efficient in a predictable way.

    3) If you want to, you can easily migrate to a different service. We did just that two months ago, from Google Cloud to AWS. It gives us vendor independence.

    • plicense 6 years ago

      Curious - why did you move from google to aws? How has it been after the move?

      • ricw 6 years ago

        Google has a far better interface and configurability. They’re also cheaper.

        Reasons for moving:

        1) Google had a mean bug that dropped long-standing connections within their own network, and they blamed us even though we followed all the guides (linger timeouts etc.). So we had to move anyway.

        2) The biggest reason was that we were given free credits by AWS.

  • flurdy 6 years ago

    * Cost

      Running costs: Plain EC2 DB is cheaper than RDS. RDS is instance costs plus RDS tax. 
    
      Configuration and maintenance cost: RDS obviously cheaper
    
    * Performance

      As other replies have detailed plain EC2 with NVMe etc could be significantly faster.
    
    * Convenience

      RDS wins most of the time.
    
    * Risk

      RDS wins unless heavy investment
    
    So it depends on your requirements. If high performance with massive DBs is important, then the costs of managing your own DBs may make sense.

    If it's a low-throughput, less risky DB, then running your own DB may make sense.

    If it's a normal business-use DB, outsourcing the maintenance and risk to RDS may make sense.

    • scarface74 6 years ago

      That’s a great explanation. I’m stealing that for my next life as an AWS consultant.

  • manigandham 6 years ago

    Heap is an analytics company that stores all (JSON) events automatically and then provides real-time queries. Their business sustainability is directly tied to how effectively they store and query this data.

    They solve it with a large cluster of Postgres servers running fast disks, with partitioning handled by the Citus extension, along with several low-level tweaks.

    RDS does not support these scales or any clustering. Aurora does not have parallel processing. Redshift would be fast but is very expensive and does not have the same level of Postgres features.

    There is Citus Cloud so you can get close to Heap's setup with Citus maintaining it all, but that gets pricey too.

    • appwiz 6 years ago

      Aurora announced support for parallel queries[1] for MySQL, with Postgres coming soon.

      (I work at AWS but not RDS/Aurora)

        [1] https://aws.amazon.com/blogs/aws/new-parallel-query-for-amazon-aurora/

      • scarface74 6 years ago

        It looks like as of the 20th it is official.

  • nil_pointer 6 years ago

    If you compare the hourly cost for any instance on EC2 vs RDS, it's significantly more expensive for the managed solution (75%+ more), which is to be expected. I know people who roll their own for cost savings.

    • plopz 6 years ago

      If you're going for cost savings I wouldn't think you would be on AWS in the first place.

      • zepolen 6 years ago

        Some people think AWS is cheaper than DIY.

        • mmt 6 years ago

          Those people are both correct and incorrect.

          Neither costs a fixed, single amount. Both are ranges that depend on the competence of the practitioner. For example, it's cheaper to use a reserved EC2 instance for 1 year than just on demand. Similarly, the default cost for DIY is much higher than the lower bound of the range.

          If DIY includes "enterprise" hardware, software, and/or support contracts (which arguably contradicts the "Y" in DIY), then the top end of that cost range is easily above the top end of the AWS cost range. In this way, those people are correct.

          However, if we limit it to truly doing it yourself [1] and with commodity hardware, software, and methods, that's no longer true.

          More importantly, as the GP mentioned, "If you're going for cost savings", it's the bottom end of the cost ranges that are key, and, even at modest scale, AWS is way above DIY. In this way, those people are incorrect.

          [1] There's a limit, of course. For me, that limit is commodity versus custom. It's safe to outsource server manufacturing (all the way up through assembly, initial burn-in, and even rack/stack), since, it's easy enough to replace with a different vendor. It's not safe to outsource specifying what goes into those servers or final "smoke" testing. Sometimes, even for something that's safe to outsource, like remote-hands replacement of failed parts, it may not be worth the price.

        • scarface74 6 years ago

          As an AWS certified true believer, I can attest that if you’re using an on-prem mindset with a bunch of EC2 instances, AWS is always more expensive. The cost savings come from taking advantage of AWS features -- mostly in terms of people.

          If you architect everything on AWS trying to avoid “lock-in”, you’re going to move slower and pay more -- the worst of both worlds.

  • dylrich 6 years ago

    Postgres RDS is missing libprotobuf-c, which is a dependency for cutting MapBox Vector Tiles if you use PostGIS/Postgres that way. A small, legitimate exception to your statement.

    https://forums.aws.amazon.com/thread.jspa?threadID=277371

    • dbetteridge 6 years ago

      I can't believe this still hasn't been fixed :/ We ended up having to roll our own Mapnik GeoJSON->MVT querying layer to work around it.

  • outworlder 6 years ago

    Our case: AWS is not the only environment we support. We deploy the same PG version, same config files and scripts across all major clouds, openstack and even baremetal.

  • misframer 6 years ago

    RDS doesn't support extensions like TimescaleDB or (as someone else mentioned) Citus.

  • kornish 6 years ago

    I believe Heap is using a compressed filesystem and also highly dependent on a Postgres plugin that RDS doesn't currently support.

    • sb8244 6 years ago

      Heap uses Citus DB which is not RDS compatible https://www.citusdata.com/customers/heap

      Citus has been a huge part of our scaling story from RDS (maxed out instances and did a ton of tuning) to Citus Cloud. I can only imagine that Heap has a ton more problems than we've experienced.

      • kornish 6 years ago

        Heap’s usage of Citus predates the rollout of Citus Cloud so definitely hats off to them for their operational know-how. I wonder if they’ve considered switching — my guess is that the compressed filesystem probably results in material savings for them so that’s likely a reason to keep operating their own cluster.

        • sb8244 6 years ago

          It probably does make sense for Heap to manage it themselves, given that it's a core competency of theirs, based on their indexing talks.

  • hamandcheese 6 years ago

    Well, if your business requires running databases really well it might be preferable to run your own. If they were using RDS they would not have been able to get this level of insight into their infra.

    Maybe RDS is already configured correctly. Maybe not. But at a certain scale not having your hands tied becomes more valuable than having everything taken care of for you.

  • cpburns2009 6 years ago

    Well for one it makes it a lot easier to have a duplicate local environment (i.e., not on AWS) to test on before pushing changes into your production environment (on AWS). It also helps prevent vendor lock-in. I'll stick to managing my own database instances.

  • whalesalad 6 years ago

    Defeats the purpose? There’s a lot that you cannot do with hosted software that you can do with your own custom setup. When you outgrow the hosted version the natural progression is to do it yourself.

  • chillydawg 6 years ago

    You don't get super user access to postgres via RDS. You can't use logical replication. Plenty of other plugins don't work. For complex use cases, RDS is often a no go.

  • adrr 6 years ago

    Because I can get extremely high IOPS on EC2 without paying a fortune. EBS is slow compared to NVMe SSD drives. Stripe 4 NVMe drives together and you can get over 1 million IOPS.
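
    A sketch of that with mdadm (device names vary by instance type, and on Nitro-based instances the EBS root also shows up as an NVMe device, so check lsblk before striping):

      mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
      mkfs.ext4 /dev/md0
      mount /dev/md0 /data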

  • PopeDotNinja 6 years ago

    Turning on Postgres features not exposed by the cloud version(s).

  • nasalgoat 6 years ago

    It's cheaper and much more tuneable.

    • scarface74 6 years ago

      As far as cheaper, are you including the man hours to manage servers?

      • nasalgoat 6 years ago

        Hundreds of dollars a month cheaper. Well worth it.

  • gigatexal 6 years ago

    Because RDS is expensive and I want to control my databases.

  • foolfoolz 6 years ago

    Because RDS and Aurora do not have a good story around schema migrations. And there are "maintenance periods".

AbacusAvenger 6 years ago

This is exactly why I wrote the "clockperf" tool during the time I was working at AWS: https://github.com/tycho/clockperf

At the time, we were trying to benchmark disk I/O for new platforms, but we found that things were underperforming compared to the hardware's specifications. We figured out that fio was reading the clock before and after each I/O (which isn't really necessary unless you care about latency measurement), and just by reading the clock we were rate-limiting our I/O throughput. By switching to "clocksource=tsc" in our fio config, we got the performance behavior we expected.
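
The relevant knob is fio's clocksource option; a minimal job-file sketch (current fio spells the TSC-based choice "cpu", and the target device and sizes here are just placeholders):

  [global]
  ioengine=libaio
  direct=1
  # time I/Os off the CPU timestamp counter instead of clock_gettime
  clocksource=cpu

  [randread]
  filename=/dev/nvme0n1
  rw=randread
  bs=4k
  iodepth=32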

  • logicallee 6 years ago

    >we managed to get the performance behavior we expected.

    can you put this into roughly quantitative terms? How much of a performance hit did you remove this way?

    • AbacusAvenger 6 years ago

      I don't remember the exact numbers (this was 2011), but the overhead of using e.g. CLOCK_MONOTONIC was substantial. Under Xen, the cost of reading CLOCK_MONOTONIC was a few orders of magnitude higher than reading the TSC. I think on Xen PV it was something like 500ns per read, while on HVM it was about 2000-3000ns or so.

      I remember with 8 disks that should have been able to do 60K 4K IOPS each (early SSD models), we were capping out at 90K IOPS with all disks in parallel at a queue depth of 32 while reading from CLOCK_MONOTONIC. When we switched to TSC I think we ended up getting around 320K IOPS. Still not perfect, but we were also capped by the particular HBA we chose (which didn't have multiqueue support).

wallstprog 6 years ago

Nice article!

If you're interested in clocks on Linux, you might also find this article useful (shameless plug): http://btorpey.github.io/blog/2014/02/18/clock-sources-in-li...

  • amluto 6 years ago

    > Note that the 100ns mentioned above is largely due to the fact that my Linux box doesn’t support the RDTSCP instruction, so to get reasonably accurate timings it’s also necessary to issue a CPUID instruction prior to RDTSC to serialize its execution.

    Huh? That’s definitely not true now, and I don’t think it ever was. Linux uses LFENCE or MFENCE, depending on CPU.

    • _msw_ 6 years ago

      Using CPUID as a serializing instruction before RDTSC{,P} is a bad, bad thing to do inside a virtual machine on Intel processors. CPUID will cause a VM exit, and the CPUID instruction will be emulated. The Intel Software Developer's Manual Instruction Set Reference gives good guidance on using MFENCE and LFENCE as required.

      https://software.intel.com/sites/default/files/managed/39/c5...

      • amluto 6 years ago

        Linux has mostly stopped using CPUID to serialize at all. When full serialization is needed, we use IRET now. In the future, we could optimize a bit by writing to CR2, except on Xen.

Johnny555 6 years ago

> And, EC2 does not live migrate VMs across physical hosts. I couldn’t find anything explicit from AWS on this, but it’s something that Google is happy to point out.

Is it a good idea for a production database to depend on a feature not being used when the vendor hasn't said that they don't or won't use it? They may very well live-migrate when convenient, but just don't expose that functionality to customers since they don't want customers demanding it.

  • KayEss 6 years ago

    With the instance types they're using the migration isn't really an option because the point of the i3 instances is the locally attached NVMe SSD disks where the database files are.

tofflos 6 years ago

Is AWS NVMe still ephemeral and how do you deal with it? What happens if a machine, or several, reboots?

  • _msw_ 6 years ago

    Hi, an engineer on the EC2 team here.

    The "ephemeral" term is a legacy. Unfortunately it is part of the EC2 API for the block device mapping [1] of the "classic" instance store interfaces on the Xen platform. I don't know exactly when we stopped using "ephemeral" in our documentation, but I think it was with the introduction of EBS around 2008.

    The "ephemeral" term confuses a lot of customers, and that's why we stopped using it. Data written to local storage is not transient, fleeting, or short lived. By 2010 we had transitioned to using "instance storage" in the documentation [2], which included a big note about how the data remains if an instance reboots for any reason (planned or unplanned).

    Still, there is a misconception that data on local instance store volumes (both the more "classic" HDD or SSD volumes that are virtualized by Xen, as well as the new generation of local NVMe storage) could vanish due to this vestigial term that lingers in the API. Many customers, as well as services like Amazon Aurora [3], build highly durable and available systems on local instance storage.

    [1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-de...

    [2] https://web.archive.org/web/20111113011016fw_/http://docs.am...

    [3] https://www.allthingsdistributed.com/files/p1041-verbitski.p...

  • kalmar 6 years ago

    Post author here. It's ephemeral, yes. It survives reboots, so that's not a problem. It doesn't survive instance-stop, so if a machine is being decommissioned by AWS we do indeed lose its data. As for how we protect against it, the main thing is replication: the data is stored on more than one machine. If we lose a machine for whatever reason, the shards from that machine are copied from a replica to another DB instance.

    • _msw_ 6 years ago

      As local NVMe storage does not have any interaction with the "classic" block device mapping APIs (the storage shows up as a PCI device, the same way that a GPU or FPGA does, and it doesn't matter in any way how the block device mapping is set up), there is no reason to use "ephemeral" to describe it.

      Said more directly: no, it is not ephemeral. It is local storage that is tied to the life cycle of the instance.

  • cornellwright 6 years ago

    Reboots are actually fine, as ephemeral data persists through a reboot on an EC2 instance. Your question is still valid, though, for the case where the instance halts. How you deal with that is specific to your application, but you have to be able to handle all the data on that ephemeral disk disappearing without warning.

    One way in the case of a database could be a second EC2 instance configured as a read replica in a different AZ.
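
    A bare-bones sketch of that with streaming replication (hostnames and paths are placeholders, and it assumes a replication role plus a pg_hba.conf entry already exist on the primary; on Postgres 12+ the -R flag writes the standby settings into postgresql.auto.conf and standby.signal instead of recovery.conf):

      # on the replica instance in the other AZ: clone the primary
      # and configure it to stream WAL from it
      pg_basebackup -h primary.internal -U replication -D /var/lib/postgresql/data -R -X stream
      pg_ctl -D /var/lib/postgresql/data start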

    • kalmar 6 years ago

      Back when we ran the Citus cluster on EBS, we lost some EBS volumes as well. This manifested as the disk not responding, followed several days later by an email from AWS with the subject "Your Amazon EBS Volume vol-123456789abcdef", telling you the disk was lost irrecoverably.

      But yeah, you need to be ready for your disks to go away no matter where they are: ephemeral, EBS, physical, whatever.