Feb 10 2026
Building Android apps in Meta’s monorepository with Buck2
Listening time: 37 mins

Guest(s): Navid Qaragozlou, Software Engineer; Joshua Selbo, Software Engineer; Iveta Kovalenko, Software Engineer

How do you keep Android build times under control when your codebase spans tens of thousands of modules and millions of lines of Kotlin? In this episode, Pascal Hartig talks with Navid Qaragozlou, Joshua Selbo, and Iveta Kovalenko from Meta's Android Developer Experience team about the technical strategies that help Meta engineers stay productive at scale.

They discuss approaches like source-only ABIs and incremental compilation – clever solutions that have helped Meta tackle the challenges of building fast in a monorepo, as well as what you can do to keep your builds fast with Buck2.


Transcript:

Pascal: You will probably all know this one: XKCD 303 depicts two developers sword fighting on their office chairs with the caption, "The number one programmer excuse for legitimately slacking off: my code's compiling." I'm not gonna lie, it can be really nice to have a reason to get up and make yourself a coffee, but there is probably also no more efficient way to murder someone's productivity than slow build times. This was precisely the worry across the industry when Google announced Kotlin as the language replacing Java in Android development. To discuss how we avoided falling off the productivity cliff, and how you can use some of those improvements for your own builds, I have three exceptional guests: Iveta, Navid, and Joshua. Welcome to the Meta Tech Podcast.

Joshua: Thank you for having us.

Iveta: Hello. Glad to be here.

Pascal: All right, so now let's introduce all the different voices we've just heard. Iveta, can we start with you? How long have you been at Meta and what did you do before?

Iveta: Okay, so I've been at Meta for almost three years now. I started as an Android engineer on WhatsApp and later I moved to the Android Developer Experience Team. Before joining Meta, I worked as a product engineer for startups, so everything from dating apps to a soccer game.

Pascal: Fantastic. I hope we get a chance to talk a bit about WhatsApp later because that's still a bit of a special case with Meta, which I think could be quite exciting. But for now, Joshua, can I pass it on to you?

Joshua: Hi, I am Joshua. I joined Meta in July of 2016 after doing a summer internship with Meta in 2015. So originally I worked on the Android Messenger app, kind of on the product side, on the audio and video calling. At some point I switched over to Android developer tooling and that's where I've been ever since.

Pascal: Just a few months before me, and I also went from UI frameworks for Android to DevX, a very similar path there. Navid, last but not least, how long have you been here and what did you do before you joined the DevX team?

Navid: Hey, this is Navid. I joined Meta about four years and four months ago, right in the middle of the pandemic, when remote working was enabled and people were moving to remote. I've been with Android DevX since then. Before joining Meta, I was working at Amazon on both the retail and AWS side.

Pascal: Fantastic. So now let's talk a bit about your team. I think most people listening to this will probably have an idea of what encompasses DevX, but there are so many different ways of framing this. Could one of you explain the mission statement of your team?

Joshua: Absolutely. The way I like to think about our team's mission is that we try to make the lives of Android developers in the company as painless as possible. For us, our customers are other Android developers within Meta. Our goal is to make them as productive as possible and to make sure that they can make their changes quickly, efficiently, and as safely as possible. One of the things we work on, of course, is build speed, which is the focus of what we're talking about today. We also work on things like the IDE space and Android Studio integration. We also work on continuous integration, making sure that we select the right Android tests and the right Android builds to run when people publish their changes. Another focus for us is making sure that developers have access to the latest frameworks, so we work on things like keeping the Android SDK up to date in our repo and making sure people have access to the latest Kotlin version and the latest Jetpack Compose. These are some examples.

Pascal: That is honestly a huge space that you own there, given how many apps and how many developers rely on all of this every single day, and how easy it is to break something. Even just a patch version upgrade can probably have a huge tail of changes that you need to ensure are made before you can actually make that switch.

Joshua: Absolutely. It can be a big challenge when there's some breaking change across let's say a new Android X library version and suddenly we need to update 500 call sites across 20 different apps. So certainly there's a challenge there.

Pascal: I can't imagine. But now let's focus on one of the areas that you just mentioned, and that is build speed. Can you talk a bit about what makes it particularly challenging to build Android apps at a reasonable speed?

Joshua: Definitely. I would say that the biggest challenge for us is simply the size of our codebase. Just as an example, the Facebook app itself has some tens of thousands of Android modules that need to be compiled, not to mention the native code and everything else. We also have some 10 million plus lines of Kotlin across the entire codebase. So certainly, just the scale and the amount of code that we need to compile is one of the biggest challenges. Avoiding compilation is a focus: making sure that developers have a high cache hit rate, and making sure we have cache warmers set up so that developers avoid having to compile as much as possible. But when developers do need to compile some code, we're going to talk about the different optimizations that we have to make sure that that's as quick as possible.

Pascal: Got it. So let's talk a bit about Buck, because that is actually the tool we use here now almost exclusively for building our Android apps. And you've recently had a big open source release of version 2.0 of the Android toolchain for Buck2. Buck2 itself, I think, came out in April 2023, and we had a whole episode about it. But what was new as part of this recent release of the Android toolchain?

Navid: Buck2 core itself we can consider as a build system which is language agnostic. It doesn't inherently have any capability to compile or build anything. The way this functionality is brought into Buck is through a layer on top of the core, which is called the rule layer. It comes with a predefined set of rules that need to be implemented to be able to compile and build different ecosystems or languages. What we released this year as part of the Android build toolchain is the ability to compile Java and Kotlin code, convert it into Dex, and assemble it into an APK. So everything that is needed to enable Buck to build an Android application, we open sourced that part of the toolchain.

Pascal: Just very concretely, when you talk about the toolchain, what does that actually involve? I know a lot of the glue within Buck, as you say, doesn't really have any insight into particular toolchains or programming languages on its own. Do you release some Starlark code? What is actually part of this open source bundle you've put out?

Navid: So the rule layer, as you said, is written in Starlark. Those Starlark rules let people define what they want to do, and then we have to implement the rules to do whatever they wanted. Let's say we have a Kotlin library: the user would write the definition of their Kotlin library, the name, the dependencies, all the plugins, everything. And then we need to write Starlark code that tells Buck how to build that library and generate the results. So part of it is running the annotation processors, compiling the code, then creating a jar file, combining the jar files, converting them into Dex. All of this has to be written in Starlark. At the heart of this toolchain are the compilers, like the Kotlin compiler and the Java compiler. But there are other tools that we use, anything from compressing artifacts and converting them into Dex, to running R8, to merging manifest files. Anything that does the job is part of the toolchain.
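
To make this concrete, here is a minimal sketch of what such a library declaration looks like on the user side. The target below is hypothetical; the exact rule and attribute names may differ from the actual open source Buck2 prelude:

```python
# BUCK file (Starlark): a hypothetical Kotlin library target.
# The rule layer turns this declaration into concrete compile/jar/dex actions.
kotlin_library(
    name = "messages",
    srcs = glob(["src/main/kotlin/**/*.kt"]),
    deps = [
        "//libraries/network:client",  # another target in the monorepo
    ],
)
```

Buck2 evaluates the declaration, and the Starlark rule implementation decides which actions to run and how to wire their inputs and outputs together.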

Pascal: Got it. And just for people who don't know, Starlark is a dialect of Python. It's not necessarily a subset, but some sort of dialect of it. It originally came out of Google for their build system Bazel, or Blaze, depending on which flavor you're looking at. But most people will just recognize it immediately as something that is very Pythonesque.

So for open source, what do you think people can actually get out of this now that it's available on GitHub?

Navid: So first and foremost, as we developed this layer, this toolchain, we learned from other build systems like Gradle, for example, or Bazel. And now we are paying it back by making it available, so that people who use other build systems can also learn from our implementation and the way that we do things; hopefully that becomes helpful. It also enables people who want to consider migrating their build system to start testing with Buck and see how it performs compared to whatever they have, or whether it can be a better answer for their requirements. Also, we hope to get some community contributions back to extend the capabilities of our toolchain beyond what is needed or implemented at Meta. Last but not least, we have improved our interaction model with JetBrains. Now, by making our toolchain available to the public, it's easier for developers at JetBrains to also test their changes against our build system, in case whatever they're developing or changing could break our assumptions.

Pascal: And this is purely an assumption, but given that we have a lot of Android projects on GitHub as well, it is probably quite helpful that we now also have the toolchain there. We could, for instance, integrate this with a CI pipeline and run the exact same build steps that we have internally on our external builds, to validate that a PR actually works as intended, without this additional round trip of somebody importing it, errors being spat out because it doesn't quite comply with our internal rule set, and going back and forth.

Navid: That's absolutely true. Yes.

Pascal: Okay, so Buck2, as we discussed in the episode that came out two and a half years ago, is basically just a system that maintains this directed graph of dependencies and then generates some outputs depending on whether some input of this graph changes. So what specific optimizations have you made to this so that you leverage the specifics of the underlying language, in this case, obviously, Kotlin?

Navid: Yes, very interesting question. In general, we can speed up builds in three ways: by doing things faster, doing fewer things, or doing more work in parallel, and that last one is what Buck is inherently good at. For example, if we can flatten the build graph and reduce its depth, we can do more things in parallel and speed up builds. When it comes to compilation, to build a library we need the public interface of its dependencies, also known as the ABI. The ABI of a dependency can be generated from its binary after compilation by stripping out the non-public parts of the binary, which makes it simpler and smaller. What we realized is that it can also be generated from the source only, the library's source only, with some limitations. We call it Kosabi, which stands for Kotlin Source Only ABI, and it's a successor to Jasabi. Both were built at Meta.

Pascal: Okay. So you talked about one of these core optimizations you can make: source-only ABIs. Can you expand a little bit on this? What actually happens when we generate one of those source-only ABI artifacts?

Navid: So in the dependency graph of a build, in order to compile a Kotlin library, we need to have all the dependencies. We need to wait for the dependencies to compile and generate their results, and then we pass those to the library to compile it, right? But it doesn't have to be like that. There are many things that we can understand, or guess, by just looking at the source code of the library itself. Every consumer only needs the public interface of a library: the public classes, the public methods, everything that the library exposes. In order to build a library, we need the public interface of its dependencies, and what the compiler uses is a binary version of that interface, called the ABI. That ABI can be generated by stripping out the non-public parts of the library, which makes it simpler and also avoids rebuilding when internal things change.

But we can also guess, or predict with enough accuracy, and generate the ABI of the library by just looking at its source code. That's what we call a source-only ABI, because it's the ABI generated by looking into the source code of the library only, not by using its dependencies.
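
As a hedged illustration (a made-up class, not Meta's actual output format), here is what an ABI conceptually keeps and strips:

```kotlin
// Full source of a hypothetical library class:
class Counter {
    private var count = 0              // private state: stripped from the ABI
    fun increment(): Int {             // public signature: kept in the ABI
        count += 1                     // method body: stripped
        return count
    }
}

// What the ABI conceptually reduces to (declarations only, no bodies):
// class Counter {
//     fun increment(): Int
// }
```

The key observation behind Kosabi is that the kept part, the public signatures, can often be read straight off the source file, which is why it can be produced before any dependency has finished compiling.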

Pascal: Got it. So you can basically build it in isolation from anything else. And I can imagine the way this works, which probably also speeds things up, is by just going through the file and looking for everything that's internal or private, that doesn't expose anything to other consumers of that particular module or class.

Navid: That's right. With that optimization, we can start compiling a library before its dependencies are actually compiled. That ABI can then be passed to other dependents of this library that only need the public ABI to start compiling. So with that trick, we flatten the build graph and make it more parallel, and with that comes a huge build speedup.

Pascal: Right, and I guess what's really important is that the way Buck works, it's not a local-only build system; we do a lot of stuff via what we call remote execution. So we upload it to a different server that can perform the compilation, and you're not even limited by your local cores, which have obviously expanded quite dramatically. The more you parallelize, the more you can fan out to the different build machines that we have somewhere in the cloud, which also speeds things up. So parallelization is really a massive gain in performance that you can potentially unlock. For this source-only ABI mode, are there any restrictions or particular rules that developers need to follow to actually make use of that optimization mode?

Navid: Yes, we cannot predict or guess everything correctly, but we can do most of it. Because we created it, we gave this source-only ABI a name: Kosabi, which stands for Kotlin Source Only ABI. It came after our optimization for Java, which was Jasabi; it follows the same pattern. But yes, we have a set of rules that need to be in place, a set of limitations that have to be satisfied in order to be able to generate the ABI from source only. We call them applicability rules, and we have internal tools that check whether a library passes all the check marks and can be adopted to generate its ABI from source only. For example, Kosabi needs the types used in a file to be explicitly imported in that file. So if a class in a file uses an interface which is defined outside that file, it needs to import that interface explicitly; otherwise, by just looking at the source code, we won't know what that interface is or what type it is. Also, wildcard imports are not acceptable, so we cannot do imports with a star; we have to explicitly name every class that we import. Another example: an explicit return type needs to be added to every non-private function or property, whether it is public, protected, or internal.

We also have some limitations on the order of superclasses and interfaces: when we define the inheritance list, the superclass needs to go first, and then the interfaces. So there are some limitations, and with these limitations in place, if the code doesn't use any of the patterns that Kosabi cannot handle, then it's eligible to have its ABI created from source only.
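
As a sketch of those applicability rules (all names hypothetical, and the exact rule set may be broader than shown), the same file before and after it becomes eligible might look like this:

```kotlin
// Not eligible for source-only ABI generation:
//
//   import com.example.models.*                 // wildcard import
//   class Parser : Validating, BaseParser() {   // interface listed before superclass
//       fun parse(raw: String) = Payload(raw)   // inferred return type on a public function
//   }

// Eligible:
import com.example.models.Payload        // every used type imported explicitly
import com.example.models.Validating

class Parser : BaseParser(), Validating { // superclass first, then interfaces
    fun parse(raw: String): Payload {     // explicit return type on a public function
        return Payload(raw)
    }
}
```

Each fix makes the public shape of the file readable without resolving any dependency, which is exactly what source-only ABI generation needs.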

Pascal: And what's interesting is that I actually quite like most of these limitations. Just in terms of readability, it's a lot nicer to state your imports explicitly instead of using wildcards, and to have the return type specified; especially if your IDE is still indexing or something, at least you know what's coming back. But also, most of these things can actually be statically determined, so you can, I suppose, codemod most of these non-compliant files to be Kosabi-eligible, and by that gradually increase the build speed of your entire repository.

Navid: That's exactly what we do at Meta.

Pascal: That's why we talk so much about codemods here; they're a really powerful tool. So you talked about the predecessor to Kosabi. What was that one called? Jasabi, okay. That makes a lot more sense actually. Okay, thank you. So what were the specific challenges that you ran into while bringing over Jasabi to Kosabi?

Navid: Compared to Java, Kotlin is a more modern language, and it uses syntactic sugar that is easier for people to write, but it means that the compiler has to do more work to correctly identify types or the semantics of the code. What we do with Kosabi is take a shortcut by not using the compiler and just using the PSI to come up with the ABI of the library.

And that's harder, because of the features that Kotlin has that Kosabi struggles to support. For example, delegation in superclasses is hard to support in Kosabi; we're not able to support it yet, especially if the delegates are not declared in the same package. The other part is all the annotation processors, third party or internal, that we use in our big codebase.

Pascal: Got it. But that sounds like there's still something in progress, so who knows, maybe next time we chat this will already all be supported. So let's talk about this a bit more in general. If you now use the new Android Toolchain 2.0 with Buck2 for your project, what can people do to actually keep the compilation fast?

Iveta: So one of the key strategies Buck2 promotes is using small modules. When you have just a few files in each module, with Buck2's support for incremental and distributed builds and tools like Kosabi, which we discussed, your build can be highly parallelized and distributed across multiple machines, so you can end up building only the minimum number of modules on your computer. But as all this optimization happens at the module level, keeping your modules small means that the compiler actually has less work to do inside those modules, which keeps your overall build fast.

Pascal: Got it. And this probably feels a little contradictory to people who work with Gradle, because there is, I would say, a fairly large overhead to introducing new modules. By default, you will probably have one big module for your app, maybe one for your tests, and that's basically it. So if you just switch it over to Buck, you have none of the benefits that you've just described. What do you do if you introduce Buck and this mantra wasn't followed before?

Iveta: So this was actually the case for WhatsApp, because WhatsApp had been using Gradle for a long time, where, as you mentioned, there was not such a strong emphasis on using small modules. The app was modularized, but the average number of source files per module was still approximately 20 times higher than in the apps that used Buck. Although a lot of work has been done to modularize further, such work takes time, especially at Meta's scale, and some modules can be quite difficult and risky to split, because their logic might serve some core functionality which nobody wants to touch and break. But we actually found out that WhatsApp was not the only one suffering from large modules. Even apps using Buck had some medium-sized modules; not many, but they still started having quite an impact on build times as the main bottleneck. So we started looking at the possibility of optimizing further by introducing incrementality inside modules, starting with the Kotlin incremental compiler built by JetBrains, using the Build Tools API.

Pascal: That sounds kind of simple enough. There's something already in the compiler, so I guess this was just plug and play, right? Now, I know that wasn't the case. So talk a bit about what the challenges were in integrating this with Buck2.

Iveta: So, yeah, I mean, you're right, the tool was already built, but the main challenge was still to ensure compatibility with the rest of our toolchain and with other tools. For example, right at the beginning of the integration, we found out that the Kotlin compiler library which we used is not compatible with the Build Tools API, because there are actually two flavors of the Kotlin compiler library: kotlin-compiler itself, which is the unshaded version, and kotlin-compiler-embeddable, which is the shaded version.

Pascal: I've run into this before. This is an absolute nightmare.

Iveta: Maybe to quickly explain what shading is: it is a process where packages and classes in a library are renamed to avoid version conflicts. Imagine, for example, that you have library A that uses class C of version one, and library B that also uses class C, but of version two. So right now you have two C classes, one with version one and a second with version two. If both classes keep the same name, library B could end up using the wrong version of class C at runtime, resulting in the app not functioning correctly, or even crashing. Shading solves this by renaming classes so that they have unique names in each library. So in this concrete example, you would end up with two different C classes.
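
In code terms, the renaming described above might look like this (the package names are illustrative, not the actual names used by the Kotlin libraries):

```kotlin
// Before shading: both libraries refer to the same fully qualified name,
// so only one version of C can win on the classpath at runtime.
//   Library A was compiled against:  com.example.util.C   (version 1)
//   Library B was compiled against:  com.example.util.C   (version 2)

// After shading library B, its bundled copy of C is relocated at the
// bytecode level, so both versions coexist and each library gets the
// one it was built against:
//   Library A still uses:  com.example.util.C          (version 1)
//   Library B now uses:    shaded.com.example.util.C   (version 2)
```

The flip side, which caused the problem described here, is that shaded and unshaded builds of the same library are different classes from the JVM's point of view and cannot be mixed.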

But back to our problem. We simply could not use our unshaded compiler library where the shaded compiler library was expected, since the classes in these two versions are different from a runtime perspective. But because we were just building a prototype at the time, we needed a way to resolve this quickly and just move forward, and replacing the Kotlin compiler library with the embeddable one across the entire codebase would have been a big effort. So to keep moving forward, we did a quick workaround by unshading the Build Tools API instead, which made it compatible with the Kotlin compiler library we used.

Pascal: That's exactly what I did as well, using JarJar over it to strip all the prefixes or suffixes. Was that basically what you did?

Iveta: Yeah, yeah, yeah. We did exactly the same; we used the same library to strip prefixes. But yeah, once we confirmed that the solution worked as expected, we then did the full migration.

Pascal: Okay. And what else did you run into that wasn't quite obvious from the start?

Iveta: Oh, okay. So another good example is ensuring compatibility with other compiler plugins. A compiler plugin allows you to inject additional logic into the compilation flow, which is executed as files are being compiled. But with incremental compilation, the set of files being compiled can be smaller, so some compiler plugins started producing incomplete results. And that's not all, because the Kotlin incremental compiler can internally trigger multiple rounds of compilation, which means that your plugin can be triggered multiple times too, potentially overriding its own results. So we had to update all impacted plugins to support incrementality and to make them accumulate results during compilation instead of replacing them. But as far as I know, this is a known issue, and there should be some improvements in Kotlin version 2.3.0.

Pascal: Amazing. So when you integrated this, I would imagine for this incremental compilation, you need to store some sort of state or these temporary artifacts somewhere. Did Buck2 already give you all the abstractions, the kind of primitives you needed for this out of the box? Or did you need to build something new into Buck2 to actually realize that?

Iveta: Okay, so Buck2 has a feature called Incremental Actions that ensures that your previous output is not removed. So whatever the compiler generated on the previous run is persisted, so that it can be used for an incremental run during the next compilation.

Pascal: Amazing. So that means it was basically already there, all the stuff that you needed, presumably because somebody else had also implemented incremental compilation for a different language, and you were able to use the abstractions that were in place. Can you talk about the speedups that you saw from unleashing incremental compilation on our codebase?

Iveta: Oh yeah. So, you know, the result very much depends on what's changed, because there can be a significant improvement from a small change in a large module, while the same change in a small module might not be that significant. So to get the overall impact, we ran A/B tests on selected apps, and we saw around a 20 to 30% drop in local build time of individual targets on average. But for large, pure Kotlin modules, the number went up to 50%.

Pascal: Yeah, and that's quite significant, because these things are exactly what you are probably seeing the most of. If you are iterating on something and you basically just want to refresh your screen, you change the padding, compile the module, and put it back on your device. This compilation probably makes up a huge chunk of the time you otherwise just sit there waiting, sword fighting in your chair.

Iveta: Exactly.

Pascal: Just shifting gears a little bit, I also wanted to touch on something else that Joshua actually mentioned in the beginning that you're also responsible for upgrading the various third party libraries that we have out there. So how do you manage third party dependencies like Android X in our mono repository?

Joshua: Right, so every Android developer knows that we use third party dependencies all the time; the Android X libraries are essential. Unlike Gradle, Buck2 does not have any native concept of package managers or how to resolve third party dependencies. The way that we handle this with Buck2 is we actually have a separate tool. We call it artificer, which is a bit of a Dungeons and Dragons reference, if you are into that. Basically, given some top-level dependency, we resolve the full dependency graph with this tool, we download all the jars into the repo, and we create a Buck target, or module, for each of these dependencies. One of the advantages of doing this ahead of time, separately from the build, is that the entire resolution and downloading process can be completely decoupled from the build process itself. We don't run into, for example, Maven rate limiting us when a developer is trying to do a build, or when we're trying to run a build in CI.
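
For illustration, the checked-in target that such a tool might generate for one resolved jar could look roughly like this; the rule and names are an assumption for the sketch, not artificer's actual output:

```python
# BUCK file (Starlark) generated ahead of time by the resolution tool.
# The jar itself has already been downloaded into the repo.
prebuilt_jar(
    name = "okhttp",
    binary_jar = "okhttp-4.12.0.jar",
    visibility = ["PUBLIC"],  # any target in the repo may depend on it
)
```

Because the target and the jar are both committed, building against this dependency never touches the network.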

Pascal: Or of course, the server being down, which never happens.

Joshua: Exactly. So, yeah, reproducibility of our builds is extremely important, to not block developers. Also, all of this work and computation can be done ahead of time, so at build time, the only thing Buck needs to do is parse the targets and start building. There's nothing extra we need to do at build time.

Pascal: Fantastic. I was not aware of the Dungeons and Dragons reference. I need to watch out in my Baldur's Gate 3 playthrough to see if I spot any references to that tool. I think we unfortunately slowly need to wrap up, but one question I definitely want to get out there, because I think there will be a nuanced answer, but maybe I'm wrong: should everybody not just switch over their Android apps and their build systems to Buck2?

Joshua: Right. I should preface this by saying that changing your build system is a big investment. But there can be big potential if you have a large codebase and you're looking to explore whether Buck2 and the different optimizations that we have for Android can be useful to speed up your build times. Also, this Buck2 Android open source support is new, so we're still looking for partners in this space. I'll say, if you're interested in partnering with us and prototyping Buck2 Android support in your app, we're very interested in working together.

Pascal: And from what I've seen, I think Buck2 is not one of those throw-it-over-the-wall repos where things just sit there. It is something where people actually engage in the pull requests, stuff gets merged, and all that. So yeah, if you're interested, follow the link in the show notes. Everything is nicely documented; there's a beautiful Docusaurus page telling you how everything works. But now, maybe as a last question for all of you: what's next for you? What's next for the DevX team? What's next for Android build speed?

Joshua: Sure. So internally, you know, we talked about these optimizations, Kosabi and incremental compilation. These are not applicable to every single target or module in our codebase; in a perfect world they would be. So we're constantly working to apply these optimizations to as much of our codebase as possible.

Thinking externally, of course, we've open sourced our Android rules. We want to make sure that these optimizations are enabled by default in open source, so that developers who want to try Buck2 Android support will get them out of the box, right away. And of course, as I already mentioned, we're looking to partner with people, so if you're interested, we look forward to working with you.

Pascal: Amazing. Navid, Iveta, do you have anything you are particularly excited about?

Iveta: We are rolling out the Kotlin incremental compiler across other apps, and I am very excited to see what the overall build speed win is going to be.

Pascal: That is exciting. I'm looking forward to the Workplace post about the results.

Navid: One other area that we are investing in, and it's in the early phases, is trying to see if we can leverage artificial intelligence to help developers break large and complex modules into smaller segments. This task is, by the way, very hard even for human beings; it's very time consuming, it comes with lots of risks, and it requires lots of effort. So we are experimenting and trying to see if we can solve that hard problem with AI.

Pascal: Fascinating, and again, I can't wait for the post. But for now, I can only thank you all for ensuring that we can move fast despite our repo getting larger and larger and our languages getting more complex, with build times actually decreasing, which is definitely no mean feat. So, Joshua, Navid, and Iveta, thank you so much for all of this and for joining me here on the Meta Tech Podcast.

Navid: Thanks for having us.

Joshua: Thank you, Pascal.

Iveta: Thank you.

Pascal: And that's it for another episode of the Meta Tech Podcast. If you're interested in learning more about Buck2, Kotlin optimizations, or how Meta's DevX team is pushing the boundaries of developer productivity, check out the show notes for links to documentation and open source resources. As always, if you have feedback or topics you'd like us to cover, reach out on Workplace or Threads at @MetaTechPod. Until next time, keep building fast and stay curious.
