Launch HN: PullRequest (YC S17) – On-Demand Code Review

90 points by lyal 7 years ago

Hi! I am Lyal Avery, founder of PullRequest (https://www.pullrequest.com) - we’re currently in the YC S17 batch. PullRequest is offering code review as a service.

We built PullRequest to help developers. After waiting several days for feedback on a pull request while a colleague was on vacation, I knew there had to be a way to improve this process. Our mission is to improve code quality and save time for dev teams. We combine static analysis and linting tools with real on-demand reviewers to augment your current code review process. Dev managers like the extra coverage, but our real intent is to free up developers to make better software more efficiently.

We’re onboarding experts across many different languages for exactly this reason: sometimes a team has only one person working within a given framework or language, and it can be difficult to get objective feedback before shipping to production if you’re working on an island.

All reviewers sign NDAs to protect your IP. We start with surface-level reviews – compliance with framework or language standards, algorithmic questions, performance concerns, and the like. Since our reviewers keep working on the same projects, they also gain the context needed for deeper reviews.

Looking forward to hearing your thoughts and feedback!

senko 7 years ago

I am skeptical that this can work well.

Having a deep understanding of the code in question is essential for a good code review. Not just the code under review, but the wider scope of the project. This helps spot architectural problems and inconsistencies, unearth hidden assumptions or assumption breakages, and the like.

Reviewing the code as a drive-by loses all of those benefits and boils down to focusing on the code at hand, coding style, and nitpicks, while implicitly assuming the code fits well with the rest (enforcing consistent coding style and pointing out code smells is certainly useful, though these can be automated to some extent by linters and services like CodeClimate).

I have been a reviewer in hundreds of pull requests, and reviews I've done where I have been intimately familiar with the existing code base were consistently much better than the reviews I did as an outsider to the project - even when, knowing this, I spent a lot more effort on the reviews as an outsider.

The founders seem to recognize this (it's mentioned in the TC article) and mention pairing up reviewers with the same companies, but this IMHO will not be enough, unless these reviewers are basically on retainer and work regularly, and often, with the same company.

I'd love to be proven wrong, so good luck PullRequest team!

  • tedmiston 7 years ago

    There are plenty of ways to significantly improve a codebase through review besides deep, sweeping architectural changes. As you know, the goal of a review varies widely depending on how big the project is, its maturity, how many people contribute to it, etc.

    Things like not knowing about certain shortcut functions in the standard library, improving design pattern usage, docstrings, and otherwise improving modularity, decomposition, cyclomatic complexity, consistency, etc. Code Climate goes far, but it doesn't do all of these things as well as an experienced engineer.

  • tyler_mann 7 years ago

    Thanks for the comments/viewpoint. This is definitely something we're looking to address, as you mentioned, by keeping the same reviewers on a project over time.

    For teams that already have a great reviewer with context, as you describe, we think PullRequest offers an extra set of eyes rather than a full replacement. We hope to save you time and add value by catching as many mistakes as possible up front, leaving the architecture to you.

git-pull 7 years ago

This looks like something that could catch on, especially if you're already compartmentalizing projects into libraries – that alleviates a lot of hesitation about sharing a codebase. It's good to see that NDAs are involved as a layer of protection.

There are things a human can suggest that computers can't, such as a refactoring.

Here are a few ideas:

- Consider adopting a standard like EditorConfig (http://editorconfig.org/) so reviewers have compliant indentation out of the box (a minimal sample follows this list)

- For Enterprise packages: perhaps there can also be an opportunity to sub-contract out features and write tests?

- Consider experimenting with internal CI tools (as done in open source projects) to scan for obvious/low-hanging fruit automatically

- Scanning for / suggesting package updates

- Provide QA / audit for a large open source project for exposure

- Security auditing
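
For the EditorConfig point, even a tiny config goes a long way (a hypothetical minimal .editorconfig – the values are just an example):

    # .editorconfig – hypothetical minimal example
    root = true

    [*]
    indent_style = space
    indent_size = 4
    end_of_line = lf
    charset = utf-8
    trim_trailing_whitespace = true
    insert_final_newline = true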

Here are things that are good to hear:

- Static analysis / linting: things like vulture, flake8, etc. seem like a nice thing to stick to. It's good that these linters support configuration files (sample below)
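
For instance, flake8 will pick up a [flake8] section from a project's setup.cfg (hypothetical values, just to illustrate):

    # setup.cfg – hypothetical flake8 configuration
    [flake8]
    max-line-length = 100
    max-complexity = 10
    exclude = .git,__pycache__,build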

  • lyal 7 years ago

    Thanks much! A lot of good notes; some initial reactions:

    Completely agree re: editorconfig. Very necessary to prevent bikeshedding. We're actually building a dedicated review IDE.

    Part of our roadmap is to offer code review to open source projects -- not just for exposure, but also as a place to develop our reviewer standards.

    We're definitely interested in security review.

lozzo 7 years ago

I am very skeptical about this service. Aside from cosmetic changes (which should be automated anyway), code reviews are better served by people who intimately know the problem being solved. Some code can look pretty neat (and pass the review) but still be a mistake to have in the codebase at all.

  • lyal 7 years ago

    Appreciate your view. I think for teams with strong code review practices, we make sense as extra eyes rather than a full replacement.

    Edit to expand:

    We also believe that reviewers attached to projects gain context quite rapidly. We had a reviewer catch an edge-case bug for one of our teams that had gone unnoticed by internal review. The economic cost of that bug would have been large... and catching it was only possible through the context the reviewer had gained in previous reviews.

danpalmer 7 years ago

Roughly speaking, I think there are 3 aims for code review:

1. Style/consistency, re-use of existing code, utils, etc.

2. Architecture/design, how does this fit into the rest of the codebase, scaling concerns, how will the deploy work, will this have race conditions, etc.

3. Knowledge sharing with other members of the team.

Currently, it looks like this would satisfy half each of 1 and 2, but will miss the (possibly large) amount of context that people working on the project have. To be honest, I don't know how you solve that. How does a reviewer who lacks knowledge of the codebase spot a common pattern and know that another dev abstracted it out into a util a few weeks ago, for example?

I also wonder what could be done to address (3). I've seen the team I work on go from a place where everyone could review everything to a place where I can't review all the code that goes live, and particularly after time off, I can't really catch up. I'd love to see some sort of automated changelog of useful notes on what has changed. I'm not sure if this is possible, but summarising merged PRs, highlighting config changes, showing new utilities that have been added, etc, would be quite valuable.
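
Even a crude sketch along these lines would be a start (hypothetical – it just shells out to git and assumes GitHub-style "Merge pull request" commit subjects):

    # Hypothetical sketch: summarize recently merged PRs from git history.
    import subprocess

    def merged_prs(since="1.week"):
        """Yield (subject, changed_files) for merge commits since the given date."""
        log = subprocess.run(
            ["git", "log", "--merges", f"--since={since}", "--pretty=%H|%s"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        for line in log:
            sha, _, subject = line.partition("|")
            # Files touched by the merge, relative to its first parent.
            files = subprocess.run(
                ["git", "diff", "--name-only", f"{sha}^1", sha],
                capture_output=True, text=True, check=True,
            ).stdout.splitlines()
            yield subject, files

    # Print a digest, flagging config changes specially.
    for subject, files in merged_prs():
        print(f"{subject} ({len(files)} files changed)")
        for f in files:
            if f.endswith((".yml", ".cfg", ".ini", ".toml")):
                print(f"  config touched: {f}")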

  • jbrooksuk 7 years ago

    To me, code style should not be part of the review process. This should be automated away https://blog.alt-three.com/code-reviews-are-not-about-coding...

    • danpalmer 7 years ago

      I completely agree, we actually use linters to automate a lot of this, but there is a class of things that linters have a hard time with, like naming, or re-use of existing design patterns or utilities.

      • tedmiston 7 years ago

        Especially true in dynamic languages like Python.

  • LrnByTeach 7 years ago

    > 3 aims for code review:

    > 1. Style/consistency, re-use of existing code, utils, etc.

    > 2. Architecture/design, how does this fit into the rest of the codebase, scaling concerns, how will the deploy work, will this have race conditions, etc.

    > 3. Knowledge sharing with other members of the team.

    > Currently, it looks like this would satisfy half each of 1 and 2

    In my opinion, even a service that addresses (1) and (2) adds lots of value for many IT projects (outside of cutting-edge startup companies).

    • lamby 7 years ago

      > Even a service that addresses (1) and (2) adds lots of value for many IT projects

      Especially when they currently have no existing code review.

  • lyal 7 years ago

    Thanks - excellent notes regarding knowledge sharing. We've kicked around the notion of automatically generating shareable reports from reviews, covering the contents of the review (and the underlying changes).

    Could be fascinating to highlight config/code/approach changes on each pull request. Could actually help velocity across entire teams.

traviswingo 7 years ago

Seems like a good idea, but I wonder about the true quality of the review. In my experience, only a true team member who's familiar with the project (i.e. has actually been working on it) can provide a quality code review. Beyond that, they're just looking for ways to optimize blocks or find weird bugs in non-breaking recursive lines...

  • lyal 7 years ago

    Great insight. Definitely something we're tackling. For one-off reviews of an individual pull request, it can be challenging to do anything but surface review. As a result, we're building summarization tools to rapidly provide context to a reviewer. Over the lifetime of a project, we assign reviewers to the same projects so they build up context, the same way team members do.

    • traviswingo 7 years ago

      That's great :). Most problems worth solving aren't easy by any means. This could be a huge market if you do it right!

tedmiston 7 years ago

I'm a huge fan of static analysis and code quality, and am really excited to see where this goes.

It would be nice to see a demo video before giving full access to my private repos.

> Pricing

> Standard starting at $49 per month*

> * Billing is dependent on amount of meaningful change per month. $9 per user per month for static analysis.

This metric is pretty unclear. Does this mean hourly billing based on reviewer time? Are there tiers or an upper bound? Is there a different tier for open source? Is the pricing different for surface vs deep reviews?

As one of those weird people that thinks doing code reviews and managing code quality is really fun, if I wanted to become a reviewer, what's the vetting process like?

Can you elaborate on, besides involving humans, how the underlying service is different than Code Climate, Codacy, etc?

P.S. Found a small bug on your dev signup form which I reported on Twitter. It would be awesome to be able to help review PullRequest using PullRequest ;).

  • lyal 7 years ago

    Great catch! Thanks, replied on Twitter. We're dogfooding our own product, so as a reviewer, you'll definitely see our code in the review queue.

naturalgradient 7 years ago

My suspicion is this:

All the issues someone with no familiarity of the code base or the problem could typically uncover are things that are prone to be automated away by software in the long run (or are already in the process of being automated).

  • lyal 7 years ago

    I think that's a natural direction for our tooling to evolve in - a lot of things that aren't caught in an automated way today will be, once tooling has been trained on real reviews. There's no replacement for the human component of review, though, and we believe that by giving reviewers repeated access to a project, they'll gain the necessary context.

jlamberts 7 years ago

I would love this as an individual when learning new languages on my own projects. I find it really hard to tell if I'm actually doing things the "right" way without talking to someone more experienced.

  • hitgeek 7 years ago

    yes this is a good thought. I wish they had a free tier for this purpose. 1 review a month or something.

    • lyal 7 years ago

      We would love to offer one review a month - unfortunately, because there are humans on the other side of the review, it's harder to do this than for a straight SaaS operation.

      We'll definitely have free tiers for our static and instrumentation product though.

acconrad 7 years ago

Awesome idea, just signed up to help out and review code! Is there an incentive / gamification system to reward strong reviewers so their reputation increases as they provide good feedback to companies?

  • lyal 7 years ago

    Thanks! We're still early in our life cycle -- but on the roadmap is the creation of reviewer profiles (as an optional feature). This'll allow us to highlight strong reviewers, their projects, etc.

    Incentives and gamification are definitely there as well. We want to bonus people for doing thoughtful code review.

    • kevinSuttle 7 years ago

      This is a big opportunity to create an entirely new specialized role. Could be very lucrative for people to make names for themselves.

      • lyal 7 years ago

        Completely agree! The idea of a 10x reviewer has been a big part of my thinking in creating this company (and of the movement of review specialization out of big tech companies and into smaller engineering teams).

        • kevinSuttle 7 years ago

          Great idea. Admittedly, I didn't get it at first, but I'll be watching your progress now that I see it. Good luck!

josh_carterPDX 7 years ago

> All reviewers sign NDAs to protect your IP.

How does your company back this up? What happens if one of your developers violates it? Will you pay the legal fees?

  • lyal 7 years ago

    We're still exploring the landscape on this. At the core, we're hiring reviewers in jurisdictions where we have a presence (currently North America), and they sign a three-way agreement with the company under review. This offers the same level of protection as working with a traditional consultant.

bberenberg 7 years ago

Do we expect them to provide feedback like "this algorithm is not right because XYZ" or "I fixed this algorithm to work correctly"? Those are very different levels of service, and I think defining exactly what someone should expect will really help set expectations.

I also think that this seems absurdly cheap, and I can't imagine it scaling with quality reviewers. Would love to be wrong on this one.

  • tyler_mann 7 years ago

    Thanks for the feedback! I think you're right that we should add more of an expectations/FAQ section to our website. We expect the feedback to be more of the former - suggestions and comments, with it still up to the code author to correct and implement. Re: pricing, we're still working out the details, but we believe we can get quality reviewers at this price. We expect our reviews/custom review client to increase our efficiency over time and drive costs down.

  • lyal 7 years ago

    To clarify, our pricing page is probably too confusing for folks at this stage -- we aren't saying that we can provide unlimited reviews for $49. Rather, teams and individuals can expect costs to scale up from that baseline.

redm 7 years ago

I like this idea, it seems useful for all the ways described. My skepticism comes from the reviewers themselves. I think they will have a hard time attracting and keeping top talent who can provide high-quality reviews as such talent will want to be creating code, not only reviewing it. I'm not sure how they would resolve this.

  • lyal 7 years ago

    Review offers flexibility that other forms of contracting don't - there's no project management, client negotiation, etc.

  • tedmiston 7 years ago

    The impression I had is that reviewers are part-time contractors rather than full-time employees - probably developers who are creating code all day long anyway.

edraferi 7 years ago

Very interesting. What are your thoughts about independent developers using this as an education tool? It would be really nice to get external input on projects I'm using to teach myself new technologies and patterns.

  • lyal 7 years ago

    We have a few folks that have signed up for just this! I think it's a neat concept.

    • edraferi 7 years ago

      Took another look at the pricing page and saw "billing is dependent on amount of meaningful change per month. $9 per user per month for static analysis."

      Does that mean I could expect to pay < $49/mo for a learning use case? These projects don't move too quickly, so I envision using static analysis extensively, then getting a more complete review 1-2 times a month.

      Might be worth clarifying the pricing description.

      • lyal 7 years ago

        Thanks for the insight. We'll definitely be clearing up the pricing packages.

pj_mukh 7 years ago

I am very interested in a product like this even if for just individual use. Your pricing says $49/month depending on "meaningful" changes suggested? What does that mean?

Another good use case: new programmers in a production environment, with an eye over their shoulders making sure they aren't making rookie language mistakes (missing simpler ways of doing things, etc.), letting the company's own engineers focus on architectural/roadmap issues.

This, to me, would be the fastest way to learn as well.

  • lyal 7 years ago

    We price based on the amount of review being done per month. $49 is the base threshold; individual reviews can vary based on the amount of code being reviewed at once. We're still figuring out an exact pricing model for individuals and teams that's easy to communicate.

overcast 7 years ago

Dang. One of the most painful things to do in this field is dig through someone's code. I don't even like figuring out MY old code. I'm surprised reviewers are voluntarily submitting themselves to this torture :D Cool program though, hope it takes off for you.

  • lyal 7 years ago

    Thanks. Different people like different things - I absolutely love reviewing code. We're hoping that by letting people focus on what they love doing, teams everywhere will be happier and more productive.

    • overcast 7 years ago

      awesome! dope domain too.

fergie 7 years ago

What are the benefits of reviewers over automated testing?

My workflow (which I believe is pretty standard) is:

* Write code

* Verify that tests pass locally (including stylistic tests, linting)

* Submit pull request

* Pull request triggers build and tests on Travis (a minimal config sketch follows this list)

* If all tests pass on Travis, code is stylistically and functionally correct

* Merge pull request
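
For concreteness, the Travis step is driven by something like this – a hypothetical minimal .travis.yml for a Python project:

    # .travis.yml – hypothetical minimal example
    language: python
    python:
      - "3.6"
    install:
      - pip install -r requirements.txt
    script:
      - flake8 .
      - pytest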

How can human reviewers improve this workflow?

  • sz4kerto 7 years ago

    > If all tests pass on Travis, code is stylistically and functionally correct

    Passing tests don't prove that

    - tests are covering all functional criteria required

    - the code doesn't add 'technical debt', i.e. structural problems that will have to be refactored/worked around in the future.

  • paradite 7 years ago

    - check for best practices

    - check for whether external or downstream services would be affected (in the absence of complete e2e testing)

    - check for coding conventions and standards that are not enforceable by linters, such as naming conventions, code structuring, positive/negative testing, effective usage of helper methods

    - check for typos

  • yessql 7 years ago

    I can write sort algorithms that are O(n log n) or O(n^2). Both pass functional tests (see the sketch below).

    I can write code that reinvents the wheel, instead of using the solid library I've never heard of.

    I can do things in non-idiomatic ways that testing and linting will never catch.
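
    A sketch of that first point (hypothetical - both versions pass the identical test; only a reviewer flags the quadratic one):

        # Hypothetical example: same test results, very different complexity.
        def sort_quadratic(xs):
            """Selection sort: O(n^2), but passes every functional test."""
            xs = list(xs)
            for i in range(len(xs)):
                # Index of the smallest remaining element.
                j = min(range(i, len(xs)), key=xs.__getitem__)
                xs[i], xs[j] = xs[j], xs[i]
            return xs

        def sort_linearithmic(xs):
            """The standard library's Timsort: O(n log n)."""
            return sorted(xs)

        assert sort_quadratic([3, 1, 2]) == sort_linearithmic([3, 1, 2]) == [1, 2, 3]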

  • maaaats 7 years ago

    Those things have nothing to do with a PR, other than maybe a PR being a good way to say "this is done, let's automatically verify it".

    A PR is about showing the rest of the team the changes, so more than one person knows how stuff works and what's going on in the code base. And it's for the rest of the team to give feedback on things like how the feature was architected, not to nitpick on indenting.

    • icebraining 7 years ago

      > A PR is about showing the rest of the team the changes so more than one person knows how stuff works and what's going on in the code base.

      But that's not really applicable here, right?

fitznd 7 years ago

Great idea! I agree that $49/mo is a bit steep if targeting startups. Though at the same time, each PR could easily take an hour to review so it could get time consuming fast. Is there any free trial?

  • icebraining 7 years ago

    To me, $49/mo seems impossibly cheap for a service that requires quite specialized human skill, not to mention the vetting and the risks inherent in handling IP from other companies. And I come from one of the poorer EU countries, not from SV.

    • lyal 7 years ago

      Yes; pricing starts at $49 per month, but that would be for a single review.

  • lyal 7 years ago

    Yes; because a human is reviewing, we have to start at a price point that makes sense. Saving time and lowering risk is worth getting really great reviewers - and to do that, we need to pay them well per review.

    We offer a free review to companies interested in trying us out. Shoot me an email (lyal@ our domain name).

tcholas 7 years ago

Congrats on building this product, guys. This tool is very interesting for startups that have only one developer, and for freelancers. However, a $49/month price may be quite expensive for these people.

  • lyal 7 years ago

    Thanks! Pricing is a place we're working out details. We'd like to offer lower tiers for individuals/freelancers in the future for our static and automated tooling.

    • thruflo22 7 years ago

      From a business POV I'd wonder whether the RainforestQA model of much higher pricing ($10,000 per month) with a focus on highly parallel resource deployment (review a lot of code quickly) might be a better strategy.

hayd 7 years ago

Do you pay reviewers? How much - per review, or per hour? How does that work?

  • lyal 7 years ago

    We're trying out a few different models right now. Our goal is to make sure that developers are getting great pay and want to work with us!

zazpowered 7 years ago

This could work well for Ethereum smart contracts

  • lyal 7 years ago

    I agree - interesting landscape to play in.

koolba 7 years ago

You should edit the submission description to make https://www.pullrequest.com/ a clickable link. I've seen that done for other Launch HN submissions.

  • lyal 7 years ago

    Thanks! Updated.