Feb 10 2026
From C to Rust on Mobile
Listening time: 47 mins

Guest(s): Elaine Wong, Software Engineer; Buping Wang, Software Engineer

What happens when decades-old C code, powering billions of daily messages, starts to slow down innovation? In this episode, Meta engineers Elaine Wong and Buping Wang share their work on a bold project to rewrite one of Meta’s core messaging libraries, written decades ago in C, in Rust. Neither came into the project as a Rust expert, but both saw a chance to improve not just performance, but developer experience across the board.

In this episode, they’ll dig into the technical and human sides of the project: why they took it on, how they're approaching it without a guaranteed finish line, and what it means to optimize for something as intangible (yet vital) as developer happiness. If you've ever wrestled with legacy code or wondered what it takes to modernize systems at massive scale, this one's for you.


Transcript:

Pascal: Hello and welcome to episode 76 of the Meta Tech Podcast, an interview podcast by Meta where we talk to engineers who work on different technologies. My name is Pascal and one of the most under-discussed aspects of the accumulation of CO2 in the atmosphere must be how build times in summer continue to get longer and longer with laptops across the Northern Hemisphere throttling for increasingly large fractions of the year. Why are there no studies about this?

As much as we'd love to work exclusively on sparkling new codebases filled with the latest and greatest, the truth is that much of our industry runs on decades-old code: battle-tested, stubbornly irreplaceable, and deeply embedded.

Things get trickier still when that code is running on billions of mobile devices, each with their own charmingly unpredictable quirks. At some point, though, technical debt becomes more than just a nuisance. It actively gets in the way.

And at Meta, where developer velocity is taken very seriously, that's a problem worth tackling. That's why Elaine and Buping set out on an ambitious incremental rewrite of one of their core libraries powering messaging across Facebook, Messenger, Instagram, and our Mixed Reality platforms in Rust. The original library, written in C, has enabled billions of people to communicate daily, but over time it's become a bottleneck for innovation and agility.

Despite being relatively new to the language, Elaine and Buping are now knee-deep in the process of rewriting this foundational piece of infrastructure. In this episode, we discuss the joys and pitfalls of the rewrite, the importance of mentorship, and how they approach a project where the primary goal isn't flashy new features, but something subtler and arguably more important, developer happiness.

And now, here's my conversation with Elaine and Buping.

We're going to be doing something a little bit different today. Instead of presenting you with a finished project, I'm putting you right in the middle of it. Because if you work somewhere, that's where you'll spend 99% of your time: in the middle of things, often without knowing if the result will actually be successful.

More specifically, though, we will be talking about Rust on Mobile, the upsides, the downsides, the challenges, and the successes. And to discuss all this, I have two brilliant guests with me. Elaine and Buping, welcome to the Meta Tech podcast.

Elaine: Hello.

Buping: Hello. Thanks for having us.

Pascal: Elaine, can I maybe ask you first, we always like to introduce our guests a little. How long have you been at Meta and what did you do before?

Elaine: I've been at Meta for almost six years now. Time flies, I guess. This is my first job out of college, so yeah, I didn't really do anything before Meta. I had some previous internships at Reddit and Google. And yeah, now I'm here.

Pascal: Amazing. And what about you, Buping?

Buping: Yeah, I've been at Meta for 11 years. Also time flies, I guess. I was also like fresh out of college. But prior to that, I did one internship at Meta also.

Pascal: 11 years.

That's really long. I feel like most of the time I'm the one with the longest tenure here, but you, I think, are now ahead of me by two years-ish. So yeah, impressive.

Elaine: He's an OG.

Pascal: For sure. Can you tell me a bit about your team's mission before we dive into the actual, well, okay, maybe not project, but the things you're doing. Gimme your team's mission.

Elaine: I guess our mission is to provide a reliable messaging library that can work across our family of apps. We work on the new encrypted messaging stuff. So we're just working on shipping that everywhere, including IG and Messenger.

Pascal: And perhaps it's useful to just talk a little bit about the scale, because when people hear messaging app, they might not necessarily understand where it is. Can you tell us a bit about the apps or different scenarios where your library or libraries are used?

Elaine: Our libraries are currently used on every, actually every mobile app in both the Facebook network and the IG network. So Facebook, Messenger, FB Lite, Instagram, Instagram Lite. Yeah, basically everywhere except WhatsApp.

Buping: I think to add there, we also have Wearables and VR headsets. Our code is also kind of there. So powering the messaging experiences over there.

Pascal: I was just going to ask about this because this is only increasing in importance at the moment as we are rolling out new features on the glasses and on Mixed Reality headsets. So you wouldn't want to start from scratch every single time you put a new Messenger into one of these devices.

Buping: Exactly. Yeah.

Pascal: So can you talk about the current state of native code in our apps? Because I kind of gave it away in the intro, but this is not written in Java and in Objective-C per platform, but there's one shared library that is actually properly cross platform and not just cross app.

Buping: So like, if you look at it basically in terms of lines of code, I would say mostly the code is still written in the app's native language, meaning like on Android, it would be Java, Kotlin. On iOS, it would be Objective-C and Swift.

And over the last couple of years, we've been pushing for a kind of modernization of the codebases as well. So like we've had the Java to Kotlin migration. We also have the Objective-C to Swift migration.

But there's also a kind of code that's like sitting at a low level that's been shared across everywhere. That's primarily written in C family language, meaning C and C++.

Pascal: Right. And can you talk a bit about where you actually choose or where this is a good fit? So what level is basically low enough, as you just said, it's usually the low level code to use these languages? Because I would imagine most people don't write graphical user interfaces in C++ and then use them across Android and iOS, for instance.

Buping: Yeah, I think if you like to look at like a simplified version of like an app architecture, like you have obviously the networking layer at the bottom that talks to the server and gets data off the wire. And then on top of that, you have on the client side, you have this kind of, we call it data layer that transforms and persists the data.

And then you have the UI layer that kind of presents data to the user or takes user input. Generally, we feel the lower you are in that kind of diagram, the closer to the network, the more value props you can find in writing xplat, or cross-platform, code. Whereas as you go up towards the UI layer, you will find more idiosyncrasies on the platform and more feature-facing development. That's where you lose some of those value props.

Pascal: Yeah. And I guess at this point, the specialization itself can actually be a value add. You don't want everything to look exactly the same between platforms, but actually match the user's expectation there.

Buping: That's true. Yeah.

Pascal: But for network, I don't think I expect my network layer to behave differently when I pick up an Android phone or an iPhone. So I think in that case, it makes sense to unify this.

The messaging library that you work on, I saw is actually written in C and not C++. What was the historical reason for choosing one over the other when it was first created?

Buping: I think it's also worth noting that we're talking about C, but it's not vanilla C. It's actually C plus an Apple Core Foundation-like runtime architecture. We call it, like, a messaging core foundation.

It's an abstraction on top of Core Foundation, and we built the Android equivalent of it, so then pretty much we can just make it truly portable across iOS and Android. I think this library, basically the runtime, was built a few years back.

I think a lot of those decisions predate our involvement. But if my understanding is correct, I think it's really kind of two reasons. One is, at the time, we were very strict on binary size.

We just really wanted to be as lean as possible in terms of the binary footprint. And also, being xplat while still meeting our performance criteria, C with this runtime was basically the best option we could find. So, you know, on iOS, for example, because we're using Core Foundation, we didn't have to kind of rewrite another standard library.

And Core Foundation also gives us a kind of runtime polymorphism, so we don't have to pay the binary-size cost of monomorphization either. And by doing that, we also basically disallowed the potential risk of people pulling in third-party libraries like Folly and Boost that would bloat your binary very quickly.

So I think the second reason is probably more around how you would actually have your code be used on iOS and Android. Because C has basically the most stable ABI ever, we are able to essentially expose APIs and data structures that are very easily adopted in the environment, especially for iOS.

Because all the data structures we're using are essentially Core Foundation-based, we can actually have a toll-free bridge into iOS. So you don't have to pay any cost in getting data across the boundary. And obviously for Java, we have to go across the JNI boundary, but it's still much better than if you have to deal with a C++ class, for example, that has crazy features on it.

So I think that's basically the two major reasons I can think of. I do feel like the second reason still stands. So even let's say in the Rust world, having like a C API exposed to the outside world is still like very valuable for us.
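To make that concrete, a Rust function exported with a plain C ABI might look like the sketch below. The function name and signature are hypothetical, not the library's actual API; it just shows the shape of a C-callable Rust entry point that Objective-C can call directly and Java can reach via JNI.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

/// Hypothetical example: byte length of a NUL-terminated UTF-8 string.
/// `#[no_mangle]` + `extern "C"` gives the symbol a stable C ABI, so
/// callers on either platform see an ordinary C function.
#[no_mangle]
pub unsafe extern "C" fn msg_byte_len(s: *const c_char) -> usize {
    if s.is_null() {
        return 0;
    }
    // CStr borrows the caller's NUL-terminated buffer without copying it.
    CStr::from_ptr(s).to_bytes().len()
}
```

Because the exported surface is plain C, the Rust internals can change freely without disturbing the Objective-C or JNI adapters on top.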

Pascal: For sure. It's been a while since I've worked on Android open source libraries that had any C++ involvement, but just figuring out what the right C++ standard library is there and potentially dealing with merging them if there are conflicting versions in there, it's a nightmare. And yeah, you circumvent all of this if you just restrict yourself to C. And the other interesting part was that you didn't even know exactly how it was started.

And I think that is such a common experience. Most of us will just work with legacy code at some point, right? And need to figure out how to actually be productive in this code base. And at which point do we just say, okay, it's enough, I think we actually need to invest in something? Sometimes we call it better engineering over here. So code modernization. Sometimes it's a rewrite. Sometimes it's something a little bit more incremental. And I guess that's what we are going into next.

But before we do this: you presented some of the benefits of C. I would also like to hear what you have maybe learned about the downsides of working on a large C code base.

Elaine: I think one of the biggest ones is like memory management. We've had like a lot of issues in the past, just needing to make sure that all the code is memory safe so that users don't have a bad experience like crashes.

Otherwise they don't have a reliable messaging app, and that's not good. And then, on the one hand, you kind of standardize the language feature set by, you know, forcing people to use the simplest thing possible, but it results in very simple things being very difficult to do.

So as an example, we have this terrible practice where you initialize all the variables at the beginning of a do-while loop, or outside of a scope that you can goto out of. And so you can have a list of like 100 variables at the top that are all initialized to null. And then like 1,000 lines later, when you're done using all the variables, just to make sure that you didn't forget to release any of them, you release all of them.

And the fact that this practice is the easiest way to ensure memory safety, and that it's still not easy because you have to scroll 1,000 lines to release your variables, that's kind of sad for, you know, code that's supposed to be highly scalable and used on millions of devices every day.

And then I think there's also a similar issue with doing, for example, async work: just having to write a ridiculous amount of code to kind of pack up everything and dispatch it places and then call some other function that you defined 2,000 lines ago. All that kind of stuff is just really unnecessary in the modern world. So yeah, it's mostly DevX, and just velocity and convenience.

And then also, you know, memory safety and stuff, because it's not good if you can just forget to release things and crash.
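For contrast with the release-everything-at-the-bottom pattern Elaine describes, here is a minimal sketch of how Rust handles the same concern: cleanup lives in `Drop` and runs automatically, in reverse declaration order, the moment values go out of scope. The `Tracked` type here is purely illustrative, not anything from the library under discussion.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A toy resource that records its own cleanup into a shared log.
struct Tracked(&'static str, Rc<RefCell<Vec<&'static str>>>);

impl Drop for Tracked {
    fn drop(&mut self) {
        // This runs automatically; no manual "release" call, no goto cleanup block.
        self.1.borrow_mut().push(self.0);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Tracked("a", Rc::clone(&log));
        let _b = Tracked("b", Rc::clone(&log));
        // Both freed right here, in reverse declaration order ("b", then "a"),
        // with zero lines of hand-written teardown code.
    }
    let out = log.borrow().clone();
    out
}
```

Since the compiler inserts the drops, forgetting a release, or releasing in the wrong order, simply stops being a class of bug a reviewer has to hunt for.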

Pascal: For sure, it also puts a lot of pressure on the reviewer of the diff, especially when it comes to: was that allocation actually freed? Was it freed in the right order? And is there not another use afterwards? So yeah, it's not just the engineer; all the layers down, it introduces complexities.

Buping: Yeah, I think DevX is definitely important, but also kind of like Elaine alluded to, the consequences of having these memory issues are actually pretty significant. We actually had SEVs in the past.

Pascal: Just jumping in here real quick to tell you that SEV stands for site event and is used here at Meta to refer to what is more commonly called an incident, with its associated incident response processes. We use these liberally and not just for oh-my-god-the-site-is-down-everything's-on-fire moments, but for all sorts of problems where it may be beneficial to just get a few more eyes on it.

And now back to the interview.

Buping: A few of them are actually extremely hard to debug. I remember like there was one like maybe like a year ago that just took maybe half of the people working on this whole big project to really kind of spend weeks to debug that.

And eventually, thankfully, we figured it out. But just imagine: during that time, the basic symptom to users is that they're experiencing crashes.

In the meantime, we've been pouring resources into finding that crash, which, if we had chosen something better, might have been avoided entirely.

Pascal: At Meta we have this very open culture where we don't really have too much ownership over different parts of the code base. Basically, everybody's empowered to go anywhere.

Does that also pose a problem with your very kind of customized C code base?

Elaine: It definitely poses a huge problem, especially when the code is inherently so unergonomic. It just causes spaghetti code to spiral, because people will come in not really knowing what the surrounding code is doing, but just needing to add in their own custom logging, or "I want to add in this special error handling behavior," or stuff like that. And then they kind of just tuck it into the code, you know, they add something into the huge block of variables that's at the top and release it at the bottom, and it just inspires mess.

So I think that's one of the big issues, and it just makes the code so much more unreadable. So much forking everywhere, really hard to manage, spiraling out of control.

Pascal: Yeah, there is something about this whole broken-windows theory, but for code. I know it didn't really pan out to be super accurate in the real world.

But I do think in codebases, there is this bad-code-begets-bad-code thing happening. Because I definitely noticed this myself: if I go into a really nicely structured code base, I will think twice before introducing something that is a little bit ad hoc, you may say hacky.

But if the code is full of little hacks, then I won't feel quite as bad about just adding my own little condition that says: if this parameter is passed in, execute this special behavior, without thinking about what is the correct abstraction to apply here.

Buping: See, I think hacky code exists everywhere, for all environments, all languages, not particularly C. But I think what made the C situation particularly worse is, you know, the verbosity we just talked about, right? There are so many things you have to do that you wouldn't have to do with other modern languages. But also, because C is super simple, you don't have a lot of tools to design your system such that components are all decoupled from each other and you have a very clear delineation of responsibilities, because ultimately it's just functions.

You don't have things like interfaces, for example, to the point where we can't apply a lot of the design patterns that would actually prevent people from messing your code up. So that's kind of how we largely got into this situation.

Pascal: Yeah, I think that's a good clarification because as you say, it does apply on all of them, but C has certain characteristics that exacerbate this. And Elaine, you wanted to say something about an example or something?

Elaine: Before we kind of started our like better engineering or like rewrite push, I wrote like a document that was just a rant about all the problems that we're having with C. And one of the things that I wrote in there was that spaghetti begets spaghetti. So kind of just echoing your bad code begets bad code.

Pascal: Amazing. Yours is definitely still catchy, though. But so let's talk about Rust.

Before we actually go into the specifics of this particular library and why you made that choice, can you talk a bit about the state of Rust on mobile? Because I was in this situation probably not half a decade ago, but I was working on Flipper, which is a little debugger that has a little native SDK that also ships for Android and iOS. And we wrote it back then in C++. And in this case, it is just a debugging thing that never ships in production.

So security, for instance, is not hugely important in this case. But we still struggled a lot with crashes and weird data races and all of these kind of problems. I looked into whether Rust was an option and the answer was a pretty definitive no.

Nobody had tried it before, really. There were just a few, maybe like three, four GitHub repositories where people had a proof of concept, but that was it. So can you talk a bit about the state of Rust in our mobile apps at Meta today?

Buping: I think we actually did a fairly thorough investigation into this before we kind of jump-started this whole migration project.

I think we do want to give a shout out to our internal Rust community, especially our programming language and runtime team. Over the years, they've definitely improved this quite a bit. So when you look at the support on mobile, I think there are probably a few aspects.

There is the developer experience, obviously. So in your IDE, what does the experience look like? I think one of the nice things about Rust is the very powerful compile-time checks. Rust Analyzer, for example, we've actually integrated very well in our IDE, so you can see real-time errors and warnings. You can see real-time type inference results inlined in the code. That's extremely useful.

I think autocomplete has been great, especially with the AI-based version. You don't have to write a lot of code manually nowadays. That's particularly good with Rust compared to C, which I think, for some reason, is lagging a little bit in that regard.

And then you have Rustfmt, which is another super powerful tool. With the IDE integration, basically whenever you save a file, you see all the code you've written formatted in a really nice way. That's super nice.

Debugging experience has also been very good. Basically, can you put a breakpoint and have the debugger pause? When the breakpoint gets paused, can you see the stack trace? Can you see the values of variables? All of those supports are there, which is very, very good for us.

Pascal: I love that you are calling out, you can set a breakpoint and the debugger actually stops.

Because it sounds like table stakes, but the number of times across different languages this particular behavior doesn't happen is just incredible.

Buping: Exactly, exactly. That's also somewhat above people's expectations.

We got a message today, somebody pinged me, like, "I was just surprised to see the debugger pause." Basically, they were debugging C code and stepped through, and then they could actually get into the Rust code and see things continuing. So that's pretty good.

And then I think beyond the IDE, you know, our mobile apps are built on top of our Buck build system. And because, as we talked about, our apps are pretty much everywhere, initially when we started the project, there were so many different variations of build errors. Just think about it: there's a Windows-particular issue you have to think about. There's an Android emulator problem you have to think about. There's an Apple silicon problem you have to think about. There's a Linux problem you have to think about.

So I think we've come to a point where we've pretty much worked out all those kinks, and you can just basically land Rust code everywhere our code is linked, which is pretty much also everywhere. So that's also very good at this point.

And also there's another aspect, which is production signal. We have this very mature, streamlined pipeline for you to be able to kind of analyze crashes in the wild. So if you get a high-firing crash, you would get a task and then you would receive a very clear stack trace so you can debug further.

It wasn't the case, I guess, months back. But now I think it's also gotten improved quite a bit for Rust stack traces in particular. So we've recently received crashes where you can clearly see the stack and the symbols being symbolicated, and where the crash line is.

So in terms of that, I think we've also come a long way.

Pascal: A long list of features that are just required to ship something into production. You can't rewrite a part of your messaging library that is used, as you've discussed, everywhere, if support for one platform is just completely absent.

Or even relatively small stuff: if you don't have support for symbolicating traces, getting them back into their original state, then you can't debug issues that happen in production, and that is also a complete deal breaker. You couldn't put something that is so critical to many of our apps out there.

So we talked a lot about the infrastructure-level support now and how far this has come, really, since I last looked into it. But can we take a little step back and talk about why you chose Rust out of the many languages that are out there that would probably work on mobile? A C++ rewrite, I guess, could have been an option. Zig is out there somewhere; I think it probably has compilation targets for these various platforms.

How did you end up with Rust?

Elaine: Well, I think one of the biggest things about Rust is the compile-time memory safety enforcement. C++ doesn't have that.

And I think since a lot of the issues that we face in the day-to-day are related to memory management, it doesn't make sense that we wouldn't choose a language that combats that most effectively. So I think that's one of the biggest reasons. I think the other thing is that, like we mentioned before, C++ has a lot of bloat that you can potentially add to it that is very easy to abuse.

Everybody has different experiences with C++ and preferred ways of doing things, and it becomes harder to make things standardized. So I think with Rust being relatively newer, we're all kind of on an even playing field and we can all work together to decide what the standard should be, which makes it a lot easier to manage that kind of bloat.

Pascal: Yeah, that's true.

There's basically only one big flavor of Rust that everybody writes, whereas in C++, I'm not sure. I'm sure there are tools that can disallow all the old features, so you only use smart pointers or something like this. But in Rust, you don't even have to really think about this.

The only big bifurcation that is out there is whether you write synchronous or asynchronous Rust, and I still feel like I'd much rather enter a synchronous code base, because I can much more easily wrap my head around it. But did you end up setting yourselves certain boundaries within the language, whether that's stuff like no async, or for instance, "we are fine with using clones everywhere, even if that means a few more allocations, because it just makes our developer experience easier"?

Buping: I think we're still early in the adoption phase, so I think we might let it play out and kind of revise what we allow and what we don't allow every now and then. But so far, we haven't imposed any very explicit restrictions on what kind of language features you can or cannot use.

But one of the things that's important for us, because we're essentially rewriting the existing code piece by piece, is that any code we replace needs to work in its original environment. So in our use cases, because we're powering mobile, it's ultimately a very constrained environment, so performance is pretty important for us. As a result of that, there will be concurrent code we can't avoid.

And also, because we're basically still in this Core Foundation runtime, a lot of objects are reference counted. So in that sense, clone is very cheap; it's just incrementing the ref counter. But I can also imagine a world where maybe it's too hard to manage these references and lifetimes in the code base, just because we've kind of already gotten used to the convenience of ARC. So we have to see how things play out and how people actually use it, but so far we don't have very clear restrictions.
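A small illustrative sketch of the "clone is just a ref-count bump" point, using Rust's standard `Rc` as a stand-in for the Core Foundation-style reference counting described here (the actual library's types are different):

```rust
use std::rc::Rc;

// Cloning an Rc copies a pointer and bumps a counter; the payload is shared,
// not duplicated, so "clone everywhere" stays cheap for ref-counted objects.
fn rc_demo() -> (usize, usize) {
    let msg = Rc::new(String::from("payload"));
    let before = Rc::strong_count(&msg); // one owner so far
    let alias = Rc::clone(&msg);         // ref count goes up; no data copied
    let after = Rc::strong_count(&msg);
    drop(alias);                         // count drops back down automatically
    (before, after)
}
```

The same intuition carries over to any ref-counted runtime: the expensive question is lifetime management, not the clone itself.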

One thing I would add on async, and this is another thing we did that we feel is pretty cool: I think we really appreciate Rust's decision to make async await an abstraction where you can provide your own runtime. Our multi-threading framework, or setup, is basically on top of, again, Apple's Grand Central Dispatch (GCD), and we also built an Android equivalent of it, with an all-pure-C interface, minus the blocks that you can use in Objective-C. It's been pretty good for our use cases, so we'd like to not change that up front, because changing the threading model of your app is not something that you want to tackle as the first step of a language adoption project.

So because of that abstraction, we were able to build essentially a runtime for async await on top of GCD, while at the same time getting all the benefits of the modern syntax, right? So you can write all your code in a more synchronous fashion, but it also doesn't prevent us from changing the runtime down the line if we feel like it. So I think we're pretty much getting the best of both worlds at this point.
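As a rough illustration of "async/await as an abstraction over your own runtime": the sketch below is a toy, thread-parking `block_on` built only on the standard library. It stands in for the real GCD-backed executor described here, which this transcript doesn't show; only the polling loop and the `Waker` plumbing are the point.

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A thread-parking signal that doubles as a Waker: when some completion
// callback (in the real system, a GCD-dispatched one) calls wake(), the
// blocked thread below is released to poll the future again.
struct Signal {
    ready: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

// Drive a future to completion: poll, and if it's Pending, sleep until woken.
pub fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let signal = Arc::new(Signal {
        ready: Mutex::new(false),
        cond: Condvar::new(),
    });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        let mut ready = signal.ready.lock().unwrap();
        while !*ready {
            ready = signal.cond.wait(ready).unwrap();
        }
        *ready = false;
    }
}
```

Because `async fn` compiles to a state machine that any executor can poll, swapping this toy loop for a GCD-dispatching one later doesn't change a line of the async code written on top of it.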

Pascal: That's really cool. I haven't heard of too many projects from here that implement their own, I'm not sure what it's called, like reactor or something, for the async await interface. I want to talk a bit about the experience of switching over. Can I maybe ask both of you what your prior experience with Rust was before you picked up this project? Elaine, what was it for you?

Elaine: I had no experience.

I only knew about the crab as the logo, which I liked a lot. I knew nothing else.

Pascal: That is really cool because I also want to just kind of showcase a little what it's like.

We are often just kind of moved around between different projects that use wildly different tech stacks. And just talking a bit about the learning experience, I think, can be quite helpful for people to understand what this is like. So what was it for you? Who helped you onboard onto not just a new language, but a new language while you spin up a new project, which I feel adds an additional barrier to entry?

Elaine: I think the stuff that Buping discussed before about, you know, getting Rust ready for production on mobile was probably the biggest piece, because there was a lot of basic stuff that I would not have been able to do without that foundation being in place first. So him kind of diving in first and then writing a lot of examples helped me a lot when I wanted to start out.

We also had a couple of meetings where we just sat together and he would help me through the code and explain some of the basic concepts to me, which was also extremely useful. I think one of the ones that took me a little while to understand was the concept of move in the memory management; I had dreams about it, like, to make sure that I remembered it correctly. It was a little difficult for me to learn, but just having a good mentor was really helpful.
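A tiny illustration of the move semantics Elaine mentions: passing an owning value like a `String` transfers ownership, and the compiler rejects any later use of the moved-from binding. The function names here are made up for the example.

```rust
// A `String` owns its heap buffer; passing it by value *moves* that ownership.
fn shout(s: String) -> String {
    // `s` is the sole owner here; the caller's original binding is now invalid.
    s.to_uppercase()
}

fn demo() -> String {
    let a = String::from("hello");
    let b = shout(a); // ownership of `a` moves into `shout`
    // println!("{}", a); // would not compile: error[E0382], borrow of moved value
    b
}
```

The compile-time error is exactly the point: the class of use-after-free that needed manual discipline in C becomes something the compiler refuses to build.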

And then beyond that, we also have a small working group with people who are specifically hoping to productionize Rust on mobile. And they were also very helpful, especially when it comes to code style. I think one of the trickiest things about moving to Rust is that there's so much, I guess, syntactic sugar that you can use to make your code nicer.

And when I first learned some of them, I was really clunky with it. There were just so many things that I could have inlined or something that I just didn't know, or maybe not didn't know, but there are just so many ways to do it that you can't remember all of them at once. And coming from C, where everything is super verbose, it feels natural to just use the most verbose possible thing.

So my first couple diffs that I put up went through several iterations of, you know, Buping and Cameron just kind of looking at it and being like, you could do this, you could do that. And it was kind of embarrassing. Like, I don't think I had anything to be embarrassed about.

Personally, I was just a little embarrassed. But yeah, I was just trying to get the stuff working, and having people there who knew a little more about what they were doing, teaching me how to most effectively use the language, was super helpful.

I was also able to buy The Rust Programming Language book and charge it to the company. So that was helpful.

Pascal: Ah, I've got it somewhere standing behind me. And I think I bought it for myself. So too bad.

But yeah, the ownership following you into your dreams, I can relate to this. It's one of those concepts that never really leaves you. Even if you go back to a language that doesn't have it, I feel like I always think about it twice and realize how many mistakes I've made in those languages, because it's not enforced there, but all the rules are actually still valid. So it's often just, oh, you've introduced a little bug here that will probably explode at some point.

Buping: Yeah, exactly. I think this is not just like our experience.

We have also heard from other engineers that learning Rust makes you think more when you're writing other languages, making you an even more careful engineer, I would say.

Pascal: Yeah. But you also talked about code review and your experience or reaction to it is obviously entirely valid.

But I also just really like learning through this kind of feedback process that exists on diffs, whether it's a new language or just a codebase I'm not super familiar with. People say, hey, you can actually do this in a much simpler fashion on your diff, send it back to you, and you can apply it and learn something. It's just reinforcement learning, but for humans.

And I personally really enjoy it.

Elaine: Despite being embarrassed, I think it was a very enjoyable way to learn. And it helped me in later diffs.

Actually, as I was publishing them, I would look back at my code and be like, actually, I can apply all this stuff that was commented on my previous diff and make it cleaner. I think my favorite moment was when I put up a diff and Cameron was actually impressed with the way I had inlined everything. He was like, wow, that was really smooth.

I was like, yeah, it took me like 12 iterations, but I made it. So that was cool.

Pascal: So Buping, what was it like for you? It sounds like you had some prior experience with Rust before you dove into this project.

Buping: I think I might have had like a three-month head start on this, but no, I didn't have any prior experience. It was all brand new to me, although I did have prior C++ experience. So I feel like a lot of the concepts can just be easily moved over to Rust.

You just have to kind of build the right mental mapping. The nice thing about Rust as a learning experience, I would say, is that it's pretty canonical, right? You buy the book, you read it, the documentation is so great, and you start writing code. And if you have some background, like I did with C++, then it's not that hard to start getting your hands dirty.

But I do agree that the feedback loop is very important, because even if you thought you knew how to write the code, there's always a way you can improve it. So Cameron, being our resident Rust guy, also helped me greatly. And forming that kind of group, it's also not that hard to find people who are passionate about Rust at Meta nowadays, I feel like.

Pascal: Yeah, we have some real industry experts here and people who contribute to the core language.

Buping: Yeah, exactly.

Pascal: A wealth of options. Has that community been helpful for you to solve some of the knotty issues?


Buping: Maybe not in particular, but this is another good thing about Meta's engineering culture: we have these Workplace groups. It's like a Facebook group, and we have a group dedicated to Rust. So we can see what kind of questions people ask and what kind of answers they get.

But there are also people who regularly post learning notes, right? Here's a topic, let me expand on it, put down all the knowledge I possess, and share it with others. So that has been very good for us, just going topic by topic. Obviously we don't have to read through everything, but whenever we wonder what people actually think about an important topic, we can look, and we'll often find very interesting answers. Also, in the age of chatbots, I feel like it's an interesting time to learn a new programming language.

You do get a lot of help very quickly. Although nowadays you still need to be super careful: sometimes it still hallucinates and will seem very confident, but the answer is completely wrong.

So I think you just need to take it with a grain of salt, but for me at least, it has been a very healthy way to accelerate my learning. So that's just another thing to point out.

Pascal: Yeah, I definitely agree.

I recently wrote a little bit of Rust and the first line of defense against newbie mistakes is usually the compiler itself, because it gives really helpful error messages with hints of how to actually fix the underlying problem. Unlike C++, where it usually just prints out 20 pages of template errors and you don't even know where to start. But the second layer is now that we have all these chatbots where you often just copy paste an error message and it will tell you, hey, try out one of these three things and one of them often works.

Can I also ask you about something that has been particularly challenging working on this migration so far?

Elaine: I think proc macros were really hard. I'm going to clown on Buping again, but that was literally my second or third day of writing Rust. And he was like, you know what would be really beautiful is if we turn this thing that we do in C into a proc macro version so that it's automated and nice.

And I was like, what the heck is a proc macro? Learning that was just really difficult: learning how to parse the AST, when I literally didn't know what an AST was. All of that was pretty difficult, but also very rewarding. It took me like two days of going back and forth with Metamate, which is our internal chatbot.

Like how do we do this? How do we do that? You know, asking stupid questions to Cameron so I didn't embarrass myself in front of Buping. Like all this kind of stuff to like finally put up like a first draft of the diff. But it was really awesome.

Afterwards, I was like, I'm so excited to use this everywhere. But in that same vein, like I mentioned before about all the syntactic sugar, the things you can inline, the stylistic stuff, I think that was one of the more confusing parts, because it's also very overwhelming at the start when you're writing code and the compiler is just screaming at you.

Like everything you're doing is like not correct. There's like red lines everywhere and you have to like fix all the red lines before you can even think about how do I make my code look nice. Whereas in C, like you can write lousy code.

It doesn't really matter, you know; you compile it after and figure it out from there. But having the little Rust formatter screaming at me while I was writing code put me in a high-pressure environment that made it a little difficult, and probably contributed to my first collection of embarrassing diffs that got commented on like 20 times. But yeah, I think that was probably the most challenging part.
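A real proc macro like the one Elaine describes lives in its own crate and typically parses the AST with the `syn` and `quote` crates, so it won't fit in a single self-contained snippet. As a hedged stand-in, here is the lighter end of Rust's macro spectrum, a declarative `macro_rules!` macro; the `square!` name is made up for illustration:

```rust
// Declarative macros pattern-match on tokens at compile time.
// (Procedural macros, by contrast, run in a separate compiler-plugin crate
// and operate on the parsed AST, which is what made them harder to learn.)
macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    assert_eq!(square!(4), 16);
    // Unlike C's `#define SQUARE(x) x * x`, an `expr` capture is treated as
    // a single expression, so there is no operator-precedence surprise:
    assert_eq!(square!(2 + 1), 9); // the C macro would compute 2 + 1*2 + 1 = 5
    println!("ok");
}
```

The precedence example is one reason a C-to-Rust migration changes macro habits: Rust macros are hygienic and operate on parsed syntax rather than raw text.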

I think also, looking at it more from a development environment standpoint, one of the challenges is how bindgen works with Rust, where it exports everything under the sun and you have to tell it what not to export. Right now our Rust is a little bit sandwiched. Obviously we want everything from some layer down to eventually be Rust, but right now it's sandwiched between the C on top and some C on the bottom, and we wrap these C objects in smart pointers.

So we have all these underlying C objects that are also getting exported, and we need to blocklist some of them. Otherwise we get duplicate linking errors, or duplicate exported libraries that cause problems at runtime. That's been kind of annoying, because we have to come up with all this weird regex to make sure we're exporting the correct libraries.

And that's getting really into the details. I feel like that's a very specific, weird challenge that we're facing. But yeah.
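The "C objects wrapped in smart pointers" pattern Elaine mentions can be sketched with RAII. Everything below is hypothetical: the `c_widget_*` functions stand in for bindgen-generated `extern "C"` declarations, and are simulated in Rust so the snippet runs on its own:

```rust
use std::os::raw::c_void;

// Hypothetical C API, simulated here so the sketch is self-contained.
// In a real FFI sandwich these would be `extern "C"` fns from bindgen.
fn c_widget_new(id: u32) -> *mut c_void {
    Box::into_raw(Box::new(id)) as *mut c_void
}
unsafe fn c_widget_id(p: *mut c_void) -> u32 {
    unsafe { *(p as *const u32) }
}
unsafe fn c_widget_free(p: *mut c_void) {
    drop(unsafe { Box::from_raw(p as *mut u32) });
}

// Smart-pointer wrapper: owns the raw C object and frees it exactly once.
struct Widget(*mut c_void);

impl Widget {
    fn new(id: u32) -> Self {
        Widget(c_widget_new(id))
    }
    fn id(&self) -> u32 {
        unsafe { c_widget_id(self.0) }
    }
}

impl Drop for Widget {
    fn drop(&mut self) {
        // RAII: when the wrapper goes out of scope, the C object is freed,
        // so callers can't see a dangling pointer or cause a double free.
        unsafe { c_widget_free(self.0) }
    }
}

fn main() {
    let w = Widget::new(7);
    assert_eq!(w.id(), 7);
    println!("widget id: {}", w.id());
} // `w` dropped here; c_widget_free runs automatically
```

The design point is that `Drop` ties the C object's lifetime to the Rust wrapper's scope, which is what removes the "whole set of memory concerns" Buping describes later.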

Pascal: I feel like starting on, what was it, your fourth day or something, with a proc macro, that is really being thrown in at the deep end. I've been doing Rust on and off for, I don't know, maybe a decade or so, and I've never written a proc macro.

Elaine: It's kind of like the time I went skiing for the first time with a friend, and on my second day he was like, oh, you can totally do this blue slope. And I was like, no, I can't.

And he was like, Elaine, if I didn't think that you could do it, I wouldn't ask you to. And then I just proceeded to spend an hour and a half falling down the mountain. I feel like that was probably Buping's opinion too.

He was like, she could probably do it if I just push her down the mountain, it'll be all good. So yeah.

Buping: In the end, you did it, so it was all worthwhile.

Elaine: I guess so. Yeah. I fell down the mountain for two days.

Pascal: Still in one piece. Well, that's good.

Elaine: It was fun.

It was good. It was worth it.

Pascal: Okay.

I think I need to slowly wrap us up, but maybe just one question, because we've discussed that this is an ongoing migration. This is not a complete rewrite; it's piece by piece, as we usually do things here. How are you going to decide whether this is a success? How do you measure success? Or is there any point where you would say, okay, this was interesting, but I think we're giving up at this point?

Buping: I would say the measurement for success, I think it kind of ties back to why we started this to begin with. So like we're facing memory issues, we're facing essentially spaghetti code that is very hard to maintain. We want to kind of get out of the situation.

So I think part of the strategy of the migration is obviously we're not going to boil the ocean. We're going to do like step by step, piece by piece approach. So every little step of the way, we should be able to see, hey, for this component, finally, we don't have to worry about this whole set of memory concerns.

And finally, we can actually design it to the point where it's maintainable. So every little step of the way is non-regrettable. But if you really want to look at big numbers, like engineering time saved, or monitoring the trend of incidents caused by safety issues, so far the time is too short to plot that line.

But also, we don't have enough code converted yet, so again, we're still at too early a stage to plot that line either. And we don't have enough people who will retrain to work on this.

So all of these are kind of in progress. We just need to build the habit of revisiting this every now and then, looking at all these metrics and seeing how we've been progressing. I will also say, I think we've probably already acknowledged that this migration will take a very long time to get to the point where we've converted all the existing code in our libraries from C to Rust, if that ever happens.

So I think we're okay if, at a certain point, we need to pause this for a while, say because there's some very high-priority work we need to focus on. Because of the very incremental approach we're taking, at any given time, when we look back at the code that's already been converted to Rust, we'll actually feel very good about the situation. It's not going to be, oh, we've introduced yet another piece of tech debt that we then have to revert.

So given that strategy, we do feel very good about how things are going. And even though the sample is very small, we're already seeing very positive feedback from these engineers, and in terms of product numbers. So I think we're onto something good here.

Pascal: That sounds like a very pragmatic approach. Cool. And now just one last question to wrap us up.

What do you do in your free time when you need to decompress from all the comments on your diffs and you're not in the middle of migrating away from legacy languages?

Elaine: Well, recently I've been going to the gym a lot. I mean, I always went to the gym a lot, but now I'm spending like three freaking hours a day in the gym. So, you know, I'm just really thinking about how angry I am about proc macros while lifting weights, you know.

Pascal: Funneling the rage. That's amazing.

Elaine: Oh, proc macros? Hit a PR.

I also have been playing like kind of like the cozy genre of Switch games. So it's like kind of two opposite ends of the spectrum. Anger, lifting, and cozy Switch games.

Pascal: Excellent. Excellent answers. What about you, Buping?

Buping: I'm more boring, I guess.

I like to go outdoors. So I try to find my time, you know, hiking and maybe do some biking. Now it's like summer in Seattle, so we're past the rainy season.

Pascal: I really envy you, Elaine. I used to have these three-hour-a-day gym sessions, but they were put to an untimely end by a nasty shoulder injury. But anyway, I want to thank you both so much for sharing your experience of migrating a legacy C codebase to Rust, and for joining me here on the Meta Tech Podcast.

Elaine: Thank you.

Buping: Thank you very much.

Pascal: And that was my interview with Buping and Elaine. As I mentioned, this episode was a little different.

The project is still very much in motion. And while Buping and Elaine have taken a no regrets approach, there's no certainty they'll see it through to the very end. But let's face it, how many projects really come with that kind of guarantee? In this industry, you often have to prove not just that your work is impactful, but that it is the most impactful thing you could be doing right now.

So I genuinely appreciate how candidly they shared their thinking and their process. If you enjoyed this episode too, why not leave us a 5 crustacean rating in your podcast app of choice? Or drop us a message on Instagram or Threads where we are @MetaTechPod. And that's it for another episode of the Meta Tech Podcast.

Until next time, stay hydrated. Toodle-loo.
