jonathan-adly 13 days ago

I have something similar [0] but with a different philosophy. Basically, it's a running Docker container that you can execute code against, with the ability to set timeouts, auto-install and uninstall dependencies, and a bunch of other cool stuff.
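
To give a feel for the general container-based approach (this is just a rough sketch using the docker Python SDK, not AgentRun's actual API): run each untrusted snippet in a throwaway container with no network and a hard timeout.

    import docker

    # Sketch: run an untrusted snippet in a disposable container.
    client = docker.from_env()
    container = client.containers.run(
        "python:3.12-slim",
        ["python", "-c", "print(2 + 2)"],  # the untrusted code
        network_disabled=True,             # blunt guard against DDoS / crypto mining
        mem_limit="256m",
        detach=True,
    )
    try:
        container.wait(timeout=10)         # raises if the code runs longer than 10s
        print(container.logs().decode())
    finally:
        container.remove(force=True)       # always tear the container down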

The pain point of all of this is dependencies and making sure someone doesn’t use your infrastructure to DDoS other folks.

0. https://github.com/Jonathan-Adly/AgentRun

  • mlejva 13 days ago

    Very cool project! I like that you support a REST API. That's something we don't have yet. We deliberately decided to use Firecracker microVMs [0] instead of containers for hardened security.

    > making sure someone doesn’t use your infrastructure to DDoS other folks

    Agreed. And crypto mining!

    > The pain point of all of this is dependencies

    Would love to learn more. Can you elaborate?

    Another challenge is making the whole product cost efficient for our users. You don't want them to pay for idle sandboxes sitting on E2B, only for the time they actually need them for serverless execution. Firecracker and its snapshots are really helpful with this.
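
    The snapshot part is essentially two calls against Firecracker's HTTP API on its unix socket. Roughly (paths and socket name are made up for illustration):

        import requests_unixsocket

        session = requests_unixsocket.Session()
        base = "http+unix://%2Ftmp%2Ffirecracker.socket"

        # Pause the microVM, then write a full snapshot (VM state + guest memory)
        session.patch(base + "/vm", json={"state": "Paused"})
        session.put(base + "/snapshot/create", json={
            "snapshot_type": "Full",
            "snapshot_path": "/snapshots/sandbox.state",
            "mem_file_path": "/snapshots/sandbox.mem",
        })
        # The sandbox costs nothing while idle; it can be restored from the snapshot later.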

    [0] https://github.com/firecracker-microvm/firecracker

yikes_awjeez 13 days ago

I directed the intrepid hackers over here for discussion, but if you'd like some examples of cool stuff people have been doing with it already, have a look at https://interpreter-weekend.devpost.com/ :) (Bonus: the catch-up material with a bunch of little snippets to build on that dropped mid-hack: https://noteshare.space/note/clv0f272x1100201mw12skfhf4#xhDF... :3)

  • yikes_awjeez 13 days ago

    Not all of said snippets got used, so go grab them and do cool stuff if this thingy sounds cool to you :D

fdcaps 12 days ago

Is there an open-source solution that I can self-host without paying anything?

E2B is doing great stuff, but it is way too expensive and we users do not like that. What is a solution for that?

  • mlejva 12 days ago

    We'll have nice and easy support for self-hosting soon-ish.

    In the meantime, everything is open-source and the infra is codified with Terraform. GCP should have the best support right now. If you want to dig into it, we'd love to give you support along the way so we can improve the process.

    Our infra repo [0] is a good place to start. Once you have E2B deployed, you can just change the E2B_DOMAIN env var and use our SDK.
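
    Roughly, in Python (from memory, so double-check the exact names against our docs):

        import os
        from e2b_code_interpreter import CodeInterpreter

        # Point the SDK at your own deployment instead of the E2B cloud
        os.environ["E2B_DOMAIN"] = "e2b.your-company.com"  # your self-hosted domain
        os.environ["E2B_API_KEY"] = "your-api-key"

        with CodeInterpreter() as sandbox:
            execution = sandbox.notebook.exec_cell("x = 41; x + 1")
            print(execution.text)  # -> 42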

    Feel free to email me, join our Discord, or open an issue if you have any questions.

    [0] https://github.com/e2b-dev/infra

    • fdcaps 12 days ago

      That is amazing, I did not know that. It would be great to have really good self-hosting documentation. The open-source community welcomes E2B as the industry-standard sandbox.

      There is another open-source repo using Docker. Is there a benchmark for speed, security, reliability, and scalability, like an evaluation with LLMs? Thanks a lot.

sandruso 13 days ago

Great, happy to see progress in this space! I built a demo for the same thing with the same use case a couple of days ago: executing untrusted JS code.

Empowering user apps with code is the way to go.

  • mlejva 13 days ago

    Thank you. Totally agree! Another interesting feature of code interpreting is that it improves the reasoning of LLMs.

mlejva 13 days ago

Hey everyone! I'm the CEO of the company that built this SDK.

We're a company called E2B [0]. We're building and open-sourcing [1] secure environments for running untrusted AI-generated code and AI agents. We call these environments sandboxes, and they are built on top of a microVM technology called Firecracker [2]. We specifically decided to use Firecracker instead of containers because of its security and ability to do snapshots.

You can think of us as giving small cloud computers to LLMs.

We recently created a dedicated SDK for building custom code interpreters in Python or JS/TS. We saw this need after a lot of our users had been adding code execution capabilities to their AI apps with our core SDK [3]. These use cases were often centered around AI data analysis, so code-interpreter-like behavior made sense.

The way our code interpreter SDK works is by spawning an E2B sandbox with a Jupyter server. We then communicate with this Jupyter server through the Jupyter kernel messaging protocol [4]. Here's how we added a code interpreter to the new Llama-3 models [5].
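
To give an idea of what that messaging looks like, here's the same flow done locally with jupyter_client (an illustration of the protocol, not our exact implementation):

    from queue import Empty
    from jupyter_client import KernelManager

    # Start a local IPython kernel and open the messaging channels
    km = KernelManager(kernel_name="python3")
    km.start_kernel()
    client = km.client()
    client.start_channels()

    msg_id = client.execute("sum(range(10))")

    # Results stream back on the IOPub channel until the kernel reports "idle"
    while True:
        try:
            msg = client.get_iopub_msg(timeout=5)
        except Empty:
            break
        if msg["parent_header"].get("msg_id") != msg_id:
            continue
        if msg["msg_type"] == "execute_result":
            print(msg["content"]["data"]["text/plain"])  # -> 45
        if msg["msg_type"] == "status" and msg["content"]["execution_state"] == "idle":
            break

    client.stop_channels()
    km.shutdown_kernel()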

We don't do any wrapping around the LLM, any prompting, or any agent-like framework. We leave all of that to our users. We're really just a boring code execution layer that sits at the bottom. We're building for the future where software will be building other software.

Our long-term plan is to build an automated AWS for AI apps and agents where AI can build and deploy its own software while giving developers powerful observability into what's happening inside our sandboxes. With everything being open-source.

Happy to answer any questions and hear feedback!

[0] https://e2b.dev/

[1] https://github.com/e2b-dev

[2] https://github.com/firecracker-microvm/firecracker

[3] https://e2b.dev/docs

[4] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...

[5] https://github.com/e2b-dev/e2b-cookbook/blob/main/examples/l...

jamesmurdza 13 days ago

Awesome!

  • mlejva 13 days ago

    Thanks, James! We should record a video on code interpreter when you're around!