One thing that I've wondered is why sorbet didn't choose to use the stabby lambda syntax to denote function signatures?
sig ->(_: MyData) { }
def self.example(my_data)
...
end
Obviously this opens up a potential can of worms of a dynamic static type system, but it looks close enough to plain ruby. My opinion is that sorbet doesn't lean into the weirdness of ruby enough: while it has the potential to be an amazingly productive tool, this is the same community that (mostly) embraces multiple ways of doing things for aesthetic purposes. For example, you could get the default values of the lambda above to determine the types of the args by calling the lambda with dummy values and capturing via binding.
Personally having written ruby/rails/c#/etc and having been on a dev productivity team myself, I say: lean into the weird shit and make a dsl for this since that's what it wants to be anyways. People will always complain, especially with ruby/rails.
I don't know of a Ruby reflection API that produces the values of default keyword arguments in a lambda, but correct me if that's not the case. If there is one, this syntax is neat enough to be worth spending some time thinking about.
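For what it's worth, Ruby's reflection only exposes parameter kinds and names (`Proc#parameters`), not default values, but the call-and-capture-the-binding trick described above does work. A minimal sketch, assuming the convention that the signature lambda returns its own `binding`:

```ruby
# Hypothetical sketch: the lambda's keyword defaults name the types,
# calling it with no arguments evaluates those defaults, and the
# returned binding lets us read them back out.
def types_from(sig)
  b = sig.call # no args: every keyword falls back to its default
  sig.parameters.each_with_object({}) do |(kind, name), h|
    h[name] = b.local_variable_get(name) if kind == :key
  end
end

type_sig = ->(my_data: Integer, name: String) { binding }
types_from(type_sig) # => { my_data: Integer, name: String }
```

Whether evaluating arbitrary default expressions just to recover type names is a good idea is another question, but it shows the syntax isn't unreachable from plain Ruby.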
When it comes to "Ruby-like and statically typed", Crystal is such an amazing language. I think the design is incredible, but every time I try to use it, I just hit so many issues and things that slow me down. I think it's such a cool language, but every time I try to use it I just end up switching back to Ruby.
I've been a ruby dev for 15+ years; i'd really love it if ruby adopted a C#-like approach to typing (albeit more ruby-like and more flexible). It's the most readable, simplest approach I would enjoy as a rubyist. Everything else (including sorbet) feels bolted on, cumbersome and cringe. I appreciate the article and how it goes over the constraints; but genuinely sorbet is just not good enough from a DSL standpoint.
Types can be fun and useful, and i'd love to see them incorporated into ruby in a tasteful way. i don't want them to become a new thing developers are forced to use, but there is a lot of utility in making them more available.
While I'm most familiar with C#, and haven't used Ruby professionally for almost a decade now, I think we'd be better off looking at typescript, for at least 3 reasons, probably more.
1. Flow sensitivity: it's a sure thing that in a dynamic language people use coding conventions that fit naturally with the runtime-checked nature of those types. That makes flow-sensitive typing really important.
2. Duck typing: dynamic languages, and certainly the ruby codebases I knew, often use duck typing. That works really well in something like typescript, including via really simple features such as type intersections and unions, but those features aren't present in C#.
3. Proof by survival: typescript is empirically a huge success. They're doing something right when it comes to retrospectively bolting static types onto a dynamic language. Almost certainly there are more things than I can think of off the top of my head.
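To make point 2 concrete, here's a plain-Ruby sketch of duck typing; the comment notes the union type a structural checker (TypeScript, or RBS, which also has unions) could assign, which C#'s nominal system can't express directly:

```ruby
# Both classes respond to #speak, so either "duck" works. A union-aware
# checker could type `announce` as (Dog | Robot) -> String; plain Ruby
# only discovers a missing #speak at runtime.
class Dog
  def speak
    "woof"
  end
end

class Robot
  def speak
    "beep"
  end
end

def announce(thing)
  thing.speak.upcase
end

announce(Dog.new)   # => "WOOF"
announce(Robot.new) # => "BEEP"
```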
Even though I prefer C# to typescript or ruby _personally_ for most tasks, I don't think it's perfect, nor is it likely a good crib-sheet for historically dynamic languages looking to add a bit of static typing - at least, IMHO.
Bit of a tangent, but there was a talk by Anders Hejlsberg as to why they're porting the TS compiler to Go (and implicitly not C#) - https://www.youtube.com/watch?v=10qowKUW82U - I think it's worth recognizing the kind of stuff that goes into these choices that's inevitably not obvious at first glance. It's not about the "best" language in a vacuum, it's about the best tool for _your_ job and _your_ team.
I think it's pretty usable now, but there is scarring. The solution would have been much nicer had it been around from day one, especially surrounding generics and constraints.
It's not _entirely_ sound, nor can it warn about most mistakes when those are in the "here-be-dragons" annotations in generic code.
The flow sensitive bit is quite nice, but not as powerful as in e.g. typescript, and sometimes the differences hurt.
It's got weird gotcha interactions with value types, for instance (though likely not limited to) the interaction with generics that aren't constrained to struct but _do_ allow nullable usage for ref types.
Support in reflection is present, but it's not a "real" type, so everything works differently; hence code leveraging reflection that needs to deal with this kind of stuff tends to have special considerations for ref-type vs. value-type nullability, and it often leaks out into API consumers too - not sure if that's just a practical limitation or a fundamental one, but it's very common anyhow.
Last I looked, there was nothing that allowed runtime checking for incorrect nulls in non-nullable marked fields, which is particularly annoying if there's even an iota of not-yet-annotated or incorrectly annotated code, including e.g. deserialization.
Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.
Still, if you engage constructively and are willing to massage your patterns and habits you can surely get to something like 99% type-checkable code, and that's still a really good help.
> Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.
If it's an object, it's as simple as having a static method on a type, like FromA(A value), and then having that static method call the constructor internally after it has assembled the needed state. That's how you'd do it in Rust anyway. There will be a warning (or an error if you elevate those) if a constructor exits without having initialized all fields or properties. Without a constructor, you can mark properties as 'required' to prohibit object construction without assignment to them via object initializer syntax too.
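A rough Ruby analogue of that pattern (names hypothetical): assemble state in a factory method, and make the constructor the single place that enforces every required field, so any object you can obtain is fully initialized. C#'s `required` or a Rust-style `From*` factory gives this guarantee statically; here it's enforced at runtime:

```ruby
class Endpoint
  attr_reader :host, :port

  # The constructor rejects missing state, so a constructed Endpoint is
  # always complete.
  def initialize(host:, port:)
    raise ArgumentError, "host required" if host.nil? || host.empty?
    @host = host
    @port = port
  end

  # Factory that assembles the needed state before construction.
  def self.from_env(env)
    new(host: env.fetch("HOST"), port: Integer(env.fetch("PORT", "80")))
  end
end

Endpoint.from_env({ "HOST" => "example.test" }).port # => 80
```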
Yeah, before required properties/fields, C#'s nullability story was quite weak, it's a pretty critical part of making the annotations cover enough of a codebase to really matter. (technically constructors could have done what required does, but that implies _tons_ of duplication and boilerplate if you have a non-trivial amount of such classes, records, structs and properties/fields within them; not really viable).
Typescript's partial can however do more than that - required means you can practically express a type that cannot be instantiated partially (without absurd amounts of boilerplate anyhow), but if you do, you can't _also_ express that same type but partially initialized. There are lots of really boring everyday cases where partial initialization is very practical. Any code that collects various bits of required input but has the ability to set aside and express the intermediate state of that collection of data while it's being collected or in the event that you fail to complete wants something like partial.
E.g. if you're using the most common C# web platform, asp.net core, to map inputs into a typed object, you are now forced to express "semantically required but not type-system required" via some other path. Or, if you use C# required, you must choose between unsafe code that nevertheless allows access to objects that never had those properties initialized, or safe code where you then can't access any of the rest of the input either, which is annoying for error handling.
typescript's type system could on the other hand express the notion that all or even just some of those properties are missing; it's even pretty easy to express the notion of a mapped type wherein all of the _values_ are replaced by strings - or, say, by a result type. And flow-sensitive type analysis means that sometimes you don't even need any kind of extra type check to "convert" from such a partial type into the fully initialized flavor; that's implicitly deduced simply because once all properties are statically known to be non-null, well, at that point in the code the object _is_ of the fully initialized type.
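That narrowing can be sketched in plain Ruby, with the checker's view noted in comments (both Sorbet and Steep narrow nilable types this way; the static checks themselves aren't shown here, only the runtime behavior):

```ruby
# Flow-sensitive narrowing: after the nil check, a checker knows `name`
# can only be a String, so no cast or extra annotation is needed.
def shout(name) # checker's view: name is String | nil
  return "anonymous" if name.nil?
  name.upcase # checker's view: name is String here
end

shout(nil)   # => "anonymous"
shout("bob") # => "BOB"
```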
So yeah, C#'s nullability story is pretty decent really, but that doesn't mean it's perfect either. I think it's important to mention stuff like Partial because sometimes features like this are looked at without considering the context. Most of these features sound neat in isolation, but are also quite useless in isolation. The real value is in how it allows you to express and change programs whilst simultaneously avoiding programmer error. Having a bit of unsafe code here and there isn't the end of the world, nor is a bit of boilerplate. But if your language requires tons of it all over the place, well, then you're more likely to make stupid mistakes and less likely to have the compiler catch them. So how we deal with the intentional inflexibility of non-nullable reference types matters, at least, IMHO.
Also, this isn't intended to imply that typescript is "better". That has even more common holes that are also unfixable given where it came from and the essential nature of so much interop with type-unsafe JS, and a bunch of other challenges. But in order to mitigate those challenges TS implemented various features, and then we're able to talk about what those feature bring to the table and conversely how their absence affects other languages. Nor is "MOAR FEATURE" a free lunch; I'm sure anybody that's played with almost any language with heavy generics has experienced how complicated it can get. IIRC didn't somebody implement DOOM in the TS type system? I mean, when your error messages are literally demonic, understanding the code may take a while ;-).
wow i certainly appreciate your perspective and insight as a regular C# developer! My experience was limited to building a unity project for 6 years and learning the differences from Ruby.
Another commenter suggested another language like crystal, and that might actually be what it really needs, a ruby-like alternative.
I love building libraries, so having the chance to talk about the gotchas with things like this is a fun chance to reflect on what is and is not possible with the tools we have. I guess my favorite "feature" in C# is how willing they are to improve; and that many of the improvements really matter, especially when accumulated over the years. A C# 13 codebase can be so much nicer than a c# 3 codebase... and faster and more portable too. But nothing's perfect!
What parts of a C# approach do you think Ruby should adopt? The syntax? I don't think anyone thinks of C# as having a particularly good or noteworthy type syntax or features—anything you can find in C# you can probably find in most other typed languages. I'm curious what specifically you think C#'s typing system is better at than e.g. Typescript, Python or Rust
Crystal still has tooling problems and compile-time problems. Neither is easily solvable. Maybe Kagi will grow to a certain size some day and could invest back into Crystal.
Hrm, i think this is a good point, It might be something better served by a ruby-like alternative. I'll have to tinker with crystal sometime. Thanks for this comment!
Interesting article, but to me it really defeats the point of Ruby. The hyper-dynamic "everything is an object" in the Smalltalk sense of the definition is much of what makes Ruby great. I hate this idea that Ruby needs to be more like Python or Typescript; if you like those languages use those languages.
I get that a lack of types is a problem for massive organizations but turning dynamic languages into typed languages is a time sink for solo developers and small organizations for zero performance benefit. If I wanted a typed language to build web apps with I'd use Java or something.
Hopefully Matz sticks to his guns and never allows type annotations at the language and VM level.
It's all about message passing. Why not allow a compile-time check of whether something responds to a message? Types/interfaces are good. Failing early is a good thing. The earliest you can fail is during editing, and then at compile/interpret time.
In Objective-C, you also pass messages to an object, and it could be anything you want. But it would output warnings that an object/class did not respond to a certain message. You could ignore it, but it would always result in a runtime error obviously.
[someObject missingMethod]
Swift is a much nicer language in terms of typing, and I have started replacing some smaller Ruby scripts with Swift. Mostly out of performance needs though. (Single file swift scripts)
The lack of proper typechecking is one of the main reasons I would not use Ruby, especially in larger teams.
Unit tests are fun and all, but they serve a different purpose.
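Ruby's runtime behaves much like the Objective-C case above, except there's no compile-time warning at all; `respond_to?` is the closest check, and it only runs when you ask:

```ruby
# No static warning for an unknown message; the failure is purely runtime.
obj = Object.new
obj.respond_to?(:missing_method) # => false

begin
  obj.missing_method
rescue NoMethodError => e
  e.name # => :missing_method
end
```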
> I'm using Ruby for a startup because it's 10x more productive for building a webapp versus a compiled language
I’ve been a Ruby/Rails dev for about 14 years. Just started working on Java projects at work and… holy shit, it slows down the entire organization and they don’t even seem to know how much of it is because of the terrible immovable object that is the mountain of Java code they made. Before this experience I had always assumed that rigid types alone would be a boon… oh how wrong I was. The rigidity just leads to people writing 10x more garbage to get simple things done, which causes a 10x worse maintainability problem. The rate at which I hear “oh lol just another NPE” is astounding, how is this constant occurrence not interpreted as a complete failure of the “type” system????
Maybe a crack team of Java Experts wouldn’t do this, but the reality of most software companies is that people of ALL skill levels are writing code. Poorly written Ruby is a whole lot easier to figure out (as an expert of Ruby) than 10x the amount of code in Java. I hate this.
I find types useful especially for smaller solo projects: because the test suite is much less complete, or even non-existent. Types give quick feedback for a bunch of silly but common errors.
It's entirely unclear to me that types reduce developer velocity on small projects; it seems roughly the same to me. I certainly don't see how it's a "time sink".
But in the end: if you don't like it, just don't use it? Ruby has never been about restricting how people use the language.
> I hate this idea that Ruby needs to be more like Python or Typescript
It's not "be more like those", it's "be more helpful, author-friendly programming", which is very much Ruby's ethos.
Every time I think about ripping out all of the RBS sig files in my project because I'm tired of maintaining them (I can't use Sorbet for a few reasons), Steep catches a `nil` error ahead of time. Sure we can all say "why didn't you have test coverage?" but ideally you want all help you can get.
As a Pythoner the most direct value I get out of types is the IDE being smart about autocompletion, so if I'm writing
with db.session() as session:
... use the session ...
I can type session. and the IDE knows what kind of object it is and offers me valid choices. If there's one trouble with it, it's that many Pythonic idioms are too subtle, so I can't fully specify an API like
not to mention I'd like to be able to vary what gets returned using the same API as SQLAlchemy so I could write
collection.filter(..., yield_per=100)
and have the type system know it is returning an iterator that yields iterators that yield rows, as opposed to an iterator that yields rows. It is simple, cheap and reusable code to forward a few fetch-control arguments to SQLAlchemy and add some unmarshalling of rows into documents, but if I want types to work right I need an API that looks more like Java.
If I understand correctly, you can do this with overloads. They don't change the function implementation, but you can type different combinations of parameters and return types.
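A runtime sketch of that idea in Ruby terms (the `filter` method here is hypothetical): one method returns a different shape depending on `yield_per`, which is exactly what overload signatures (Python's `typing.overload`, or RBS's `|`-separated method types) let a checker distinguish per call site:

```ruby
# One method, two result shapes: rows one at a time, or batches of rows.
# An overloaded signature would give each call site the precise return type.
def filter(rows, yield_per: nil)
  return rows.each if yield_per.nil? # enumerator over single rows
  rows.each_slice(yield_per)         # enumerator over batches
end

filter([1, 2, 3]).to_a               # => [1, 2, 3]
filter([1, 2, 3], yield_per: 2).to_a # => [[1, 2], [3]]
```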
LLMs can probably help maintain them, so that could be solved if you start using LLMs more.
This maybe already exists, but it would be nice if RBS or Sorbet had a command you could run that checks that all methods have types and tries to 'fix' anything missing via help from an LLM. You'd still be able to review the changes before committing it, just like with lint autofixing. Also you'd need to set up an LLM API key and be comfortable sharing your code with it.
> I also wonder if we’ll eventually be able to pass type information to the JIT.
It's not like Ruby doesn't have types and the JIT can't determine them. It just happens at runtime, literally the defining feature of dynamic languages.
> There was also a project called TypedRuby, largely a passion project of an engineer working at GitHub. After a few weeks of evaluation, it seemed that there were enough bugs in the project that fixing them would involve a near complete rewrite of the project anyways.
There are 6 open bugs and 4 closed ones. This seems like either it's throwing shade or they didn't bother lodging bug reports upstream.
We were in direct contact with the author, most bugs were reported over email. The project was a hobby project and we did not feel it to be prudent to flood someone’s hobby project with dozens of issues created by a team working at a venture funded startup.
man i always bounce between loving ruby's speed to build and wishing for just a bit more built-in safety tbh. you ever feel like adding more types actually slows you down or does it make big teams way less stressed?
I've been a Ruby dev since 2006 and started using Sorbet fulltime at work about a year ago. After about six months I started wanting it in my personal projects. Not enough to add the dependency, but if Ruby had it built-in I would probably use it. For myself, it makes it easier to work on code I'm not familiar with either because I didn't write it or I wrote it longer than X days ago.
The Sorbet syntax is pretty bad, though. And things go from bad to worse as soon as you get a bit tricky with composition or want to do something like verify the type of an association.
I haven't tried inline RBS comments yet, but the DSL does look more pleasing.
The team in general had mixed views. Some hated it, some liked it, most developers just don't really care–although now the Sorbet advocates are claiming it helps the AI, so now there is that leaning on the Sorbet lever as well.
If you only work in small teams, or small projects, I could see this opinion getting formed and maybe agree.
But having worked on large projects... Types are fucking great. So much value exists in simply having a tool that highlights everywhere in the codebase my change impacts.
It makes refactoring at scale possible and expedient in a way that is simply better. Period.
I can be fearless changing things, because I don't have to wonder about all the places my change might have broken - it literally gives me the exact set of files and lines I need to go fix. I don't even have to rely on tests.
It's... Better. It's better even for my personal projects, and it's unimaginably better for my 300 engineer org.
Perhaps Devil's advocate, but my rebuttal would be that we're not necessarily comparing similar gains to correctness as to optimisation, and that perhaps the correctness gains can be better attained another way where the optimisation gains cannot.
(Personally I find a type checker tremendously valuable for communicating with myself - and my team, I suppose - about assumptions being made in other parts of the program, which can probably be accounted "correctness".)
Also what's good for enterprise isn't good for everyone.
I get that orgs probably like TS so that newbie devs don't do crazy things in the code, and for more editor hand-holding. But it's not valuable for everyone; if it were actually better then everyone would be using it, not just some people.
I'm a professional developer with 20 years of experience and I wouldn't dream of starting a new side project—even with myself as the only developer—without types.
I've learned by hard experience that past me was an idiot and future me is clueless. Types are executable documentation that I can leave behind for future me so that future me can get instantaneous feedback while refactoring the project that past me wrote.
Any reason you stick to ruby? Why not use a statically typed language which offers this feature out of the box? We don't argue against typing. We argue against typing as it is done in ruby with rbs/sorbet.
But the amount of money going by is so huge they can take a tiny percent of it to pay their server bills.
Dynamic languages in large systems have been controversial since there have been dynamic languages. It was routine for heroic AI systems of the golden age (1980s) to be prototyped in Lisp and then be rewritten in C++ for speed. You'd hear stories about the people who wrote 500,000 lines of Tcl/Tk to control an offshore oil drilling rig and the folks who think "C++ is a great language for programming in the large" but who don't seem to mind having a 40 minute build.
> But the amount of money going by is so huge they can take a tiny percent of it to pay their server bills.
I think that's an oversimplification that doesn't do justice to the complexity or difficulty of the issue. Both sides of the argument can make a really good case.
Slow programs (due to language or other factors) increase server costs, but that's only part of the problem. They also increase operational and architectural complexity, meaning you spend more time, effort and specialized expertise on these secondary effects. Note that some of that shimmers through in the article, mentioning "CI flakiness": it's probable that the baseline performance and its effects are a factor there. Performance has a viral effect on everything, positively or negatively. They also complain about async not being a part of the story here, which is also a performance issue.
On the other hand, this shop has hundreds of developers. That means deep expertise, institutional knowledge, tooling, familiarity with each line of code and a particular style that they follow. Ruby is also a language that is optimized for programmer experience, and is apparently a fun language to work with. All those things are very essential investments, individually and as a whole. Then we can assume that they use Ruby like most glue languages are used and either write C/C++ modules where appropriate or attach high performing components into their architecture (DBs, queues, caches etc.)
The reason I'm harping on this point is because I think it's important to acknowledge a more complete reality than the "hardware is cheap, people are expensive" statement. I assume it's very likely a throwaway line and not a hill you'd die on. But it masks two interesting and important things: Performance matters a lot and impacts cost transitively, but people and culture ultimately dominate for very good reasons.
I think you could have very happy devs working on a system like Stripe in either Java or Ruby, the choice of language is less important than everything else.
Personally I would judge an engineering manager on how seriously they take issues with the build. The industry standard is that a junior dev having trouble with the build is told he's on his own, the gold standard is that he gets a checklist and if the checklist doesn't work it is treated as if the dev had said "I have cancer".
A fast language can be slow to build. [1] If you were really designing a system for long term maintainability fast and sensible tests and solid CI would be determining factors. The only velocity metric which needs to be measured is the build time.
At the moment I have a side project I'm doing which is a port of the arangodb API to postgresql, intended to bring a few arangodb applications into the open source world. I test at the level where I make a new postgres db and do real operations against it, because I really need to do that to soak up all the uncertainty in my mind, never mind test the handful of stored procs that the system depends on. I have 5 big tests instead of the 300 little tests I wish I had because it takes 0.2s to set up and tear down the fixtures, and if the tests take too long to run that's really a decision that I'm not going to do any maintenance on the thing in the future.
I am working on a React system which is still in an old version where test frameworks can't tell if all the callbacks are done running which means practically I don't write tests because even though I'm a believer in tests, it already takes 50 sec to run the test suite and frequently as much time to update tests as it takes to make changes. Had I been writing tests at the rate I want my tests would take 500 sec to run.
[1] Though I'd argue that performance concerns are one reason we can't have nice things when it comes to flexible parsers and metaprogramming; Python's use of PEG grammars has so far fallen so short of what it should have been because of these concerns, no unparser to go with the parser, no "mash up SQL or some other language into a certain slot in the Python grammar"
Indeed. But I guess having something like 20 million lines of Ruby code and the people that built that, means it makes more sense to spend time on making Ruby safer than re-implementing it all in something like Java.
Especially given the current code base is battle-tested, while any re-write would almost inevitably introduce bugs.
With something like Ruby you can also just reimplement certain functions in the compiled language of your choice. The nice thing about dynamic languages are that they're great scripting languages and great in a polyglot codebase.
As a meta comment (responding to your question about downvotes): This is a very, very frequent topic of discussion any time Shopify, Stripe, Github or another huge Ruby shop make a technical post that's interesting to Rubyists. It's very annoying to have to retread the exact same discussion over and over again just because some people have to make this comment every time Ruby is even tangentially mentioned. So people aren't downvoting because they blindly disagree, they're downvoting because this is an uninteresting comment to leave on this article specifically.
Ruby is a language that grants you lots of freedom. Including often the freedom to shoot your own foot. Bolting on types to it is something I expect to be possible in Ruby and completely in tune with one or both my previous statements.
Ruby is a language that grants you lots of freedom.
Nobody can argue with that. With freedom we can do anything, but is anyone wondering why in the first place?
What about wisdom and a pragmatic approach? If you ever need types, why not choose a typed language for your case, instead of re-implementing something and wasting so much time on work that has no real use? Why not use C bindings for type checks? That feature is already available in ruby (rb_obj_is_kind_of(value, rb_cString)) and has no cost. Is it wise? I don't think so; I would be shocked if I saw such code in production.
We care so much about the environment and climate change, but think it is OK to waste resources on such an activity. It is not a research task. People really advocate using something like sorbet in real-world applications. DHH even removed typescript from his frontend libraries, and these types are like a kindergarten exercise for some homemade rack/hanami/dry-rb application that will serve 2 people in production (the creator of this stuff and a random visitor). Maybe someone could tell me why we need something like rbs/sorbet in the ruby ecosystem. I'm really wondering. Not because ruby grants us freedom and we can do that; that's not an answer. We can do an enormous number of absurd things. But the question is why?
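For reference, the C function cited above is what `Object#is_a?` already calls under the hood; this kind of check is built into every running Ruby program without any annotation layer:

```ruby
# rb_obj_is_kind_of backs Object#is_a? / #kind_of?; the check happens at
# runtime, when the code actually executes.
"hello".is_a?(String) # => true
42.is_a?(String)      # => false
```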
Woah there, you're making a straw man here but I'll bite. I'll preface it with the fact that I also think types are against the very nature of Ruby as I know and practice it.
But, why not choose a proper statically typed language? Because you've inherited a mastodon of a legacy app that started simple, then matured with thick layers of features. How can you wrangle this? Oh, just rewrite it in Rust. Or you could seek an iterative approach and introduce tools to help you. Like bolted-on types.
Oh and I'm not sure I follow what all this about the environment is... If one is so conscious about it, maybe choose a different field entirely?
> Why would one try to make ruby "look like" a statically typed language? It violates the idea of ruby, which is a dynamically typed language.
People said the exact same things when Typescript was first being released. I think history has proven all those people decisively wrong. I think you'd need a compelling reason that Ruby has some je ne sais quoi that Javascript doesn't in order to support your point.
The thing is, I don't want ruby to meet the fate of javascript. The javascript community is much bigger than ruby's will ever be. They had to manage so many people writing so much code, and invented a tool for them so they could contribute without having to learn a completely new programming language. Easier to jump in. And I should mention typescript's types are much better than what we have in rbs/sorbet: they have types inside the language, without annotations like the separate sig DSL we have in ruby.
We should invent a new programming language, typed_ruby, and give it a separate life, and then history will prove whether it should have been implemented in the first place. But we should not try to push it into the stdlib and persuade everyone else that it is the right way to go.
Typescript's success has much more to do with JavaScript being the only choice we have in the browser than it has to do with particular programming language design that we must seek to replicate.
it's amazing to me that the industry hasn't found a better way to support bits of code interacting with other bits of code properly, than adding an attribute to a bit of code calling it a "type"
...saying this as someone who benefits from it but also rarely uses sub-typing ("poodle" type sub to "dog" type sub to "animal" type) or any sort of the other benefits that are commonly associated with typing. for me the benefit is code checking and code navigation of unfamiliar code bases
structural typing is a good step - but afaik there's no way to distinguish between different types that have the same structure but different context - a "truth.right" will be different from a "direction.right", but it's up to the dev to keep that straight.
duck typing is what ruby does without sorbet or rbs; it portrays nothing about how the code bits interact at the boundary. if a "dog" is passed in but a "cat" is expected, things will work just fine until runtime when the cat is asked to bark. (saying this as someone who is a big fan of ruby overall)
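The cat-asked-to-bark scenario in runnable form: nothing at the boundary objects to the wrong animal, and the error appears only when the message is actually sent:

```ruby
class Dog
  def bark
    "woof"
  end
end

class Cat
  def meow
    "meow"
  end
end

# No type at the boundary: anything can be passed in.
def alert(animal)
  animal.bark
end

alert(Dog.new) # => "woof"

begin
  alert(Cat.new) # works "just fine" until this line runs
rescue NoMethodError => e
  e.name # => :bark
end
```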
It is the developer's responsibility, during the design stage of your classes, to ensure that you will not receive a cat. And inheritance is bad; that is also a sharp knife. But annotating everywhere
sig { params(cat: Cat).void }
does not improve your design, it just makes things noisy and clumsy. I would think that if your code needs type annotations, it smells like bad design and should be considered for refactoring.
One thing that I've wondered is why sorbet didn't choose to use the stabby lambda syntax to denote function signatures?
Obviously this opens up a potential can of worms of a dynamic static type system, but it looks sufficiently close enough to just ruby. My opinion is that sorbet doesn't lean into the weirdness of ruby enough, so while it has the potential to be an amazingly productive tool, this is the same community that (mostly) embraces multiple ways of doing things for aesthetic purposes. For example you could get the default values of the lambda above to determine the types of the args by calling the lambda with dummy values and capturing via binding.Personally having written ruby/rails/c#/etc and having been on a dev productivity team myself, I say: lean into the weird shit and make a dsl for this since that's what it wants to be anyways. People will always complain, especially with ruby/rails.
I don't know of a Ruby reflection API that produces the values of default keyword arguments in a lambda, but correct me if that's not the case. This syntax is neat enough to be worth spending some time thinking about if one exists.
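For what it's worth, the closest built-in reflection I know of is Proc#parameters, which exposes parameter names and kinds but not their defaults; that gap is exactly why the calling-with-dummy-values trick from the parent comment would be needed. A quick sketch:

```ruby
fn = ->(x: 1, y: "hi") {}

# kinds and names are visible, but the default values (1, "hi") are not
fn.parameters  # => [[:key, :x], [:key, :y]]
```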
When it comes to "Ruby-like and statically typed", Crystal is such an amazing language. I think the design is incredible, but every time I try to use it I hit so many issues and things that slow me down, and I just end up switching back to Ruby.
Same! I reach for crystal when I need something that is going to perform faster than ruby, but man I get my hand slapped constantly when writing it.
I love Crystal. It's what I wish Ruby was. If only the tooling was a bit better
Crystal could have been big if it worked really well in combination with Ruby out of the box (f.i. as an extension language).
Shout out to the crystalruby[1] project. I’ve not used crystal much myself but I love how easy it is to switch between the two.
1. https://github.com/wouterken/crystalruby
I've been a ruby dev for 15+ years; i'd really love it if ruby adopted a C#-similar approach to typing (albeit more ruby-like and more flexible). It's the most readable, simplest style I would enjoy as a rubyist. Everything else (including sorbet) feels bolted on, cumbersome and cringe. I appreciate the article and how it goes over the constraints; but genuinely sorbet is just not good enough from a DSL standpoint.
Types can be fun and useful, and i'd love to see them incorporated into ruby in a tasteful way. i don't want it to become a new thing developers are forced to do, but there is a lot of utility from making them more available.
While I'm most familiar with C#, and haven't used Ruby professionally for almost a decade now, I think we'd be better off looking at typescript, for at least 3 reasons, probably more.
1. Flow sensitivity: It's a sure thing that in a dynamic language people use coding conventions that fit naturally with the runtime-checked nature of those types. That makes flow-sensitive typing really important.
2. Duck typing: dynamic languages, and certainly the ruby codebases I knew, often use duck typing. That works really well in something like typescript, including really simple features such as type intersections and unions, but those features aren't present in C#.
3. Proof by survival: typescript is empirically a huge success. They're doing something right when it comes to retrospectively bolting on static types in a dynamic language. Almost certainly there are more things than I can think of off the top of my head.
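Points 1 and 2 map directly onto everyday Ruby. Code like the sketch below (illustrative, not from the comment) needs a union type (Integer or nil) plus flow-sensitive narrowing to type-check at all, which is exactly what a TypeScript-style checker provides:

```ruby
# idiomatic Ruby returns nil-or-value and branches on it; a flow-sensitive
# checker narrows the union (Integer | nil) inside each branch
def parse_port(str)
  Integer(str, exception: false)  # Integer on success, nil otherwise
end

port = parse_port("8080")
if port
  # a flow-sensitive checker knows port is an Integer here
  port + 1
else
  # ...and knows port is nil here
  raise ArgumentError, "bad port"
end
```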
Even though I prefer C# to typescript or ruby _personally_ for most tasks, I don't think it's perfect, nor is it likely a good crib-sheet for historically dynamic languages looking to add a bit of static typing - at least, IMHO.
Bit of a tangent, but there was a talk by Anders Hejlsberg on why they're porting the TS compiler to Go (and implicitly not C#) - https://www.youtube.com/watch?v=10qowKUW82U - I think it's worth recognizing the kind of stuff that goes into these choices that's inevitably not obvious at first glance. It's not about the "best" language in a vacuum, it's about the best tool for _your_ job and _your_ team.
Tangent, has C# recovered from nulls being included in all reference types by default?
"Recovered" sounds so binary.
I think it's pretty usable now, but there is scarring. The solution would have been much nicer had it been around from day one, especially surrounding generics and constraints.
It's not _entirely_ sound, nor can it warn about most mistakes when those are in the "here-be-dragons" annotations in generic code.
The flow sensitive bit is quite nice, but not as powerful as in e.g. typescript, and sometimes the differences hurt.
It's got weird gotcha interactions with value types, for instance (but likely not limited to) generics that aren't constrained to struct but _do_ allow nullable usage for ref types.
Support in reflection is present, but it's not a "real" type, so everything works differently. Hence you'll see that code leveraging reflection that needs to deal with this kind of stuff tends to have special considerations for ref-type vs. value-type nullability, and it often leaks out into API consumers too - not sure if that's just a practical limitation or a fundamental one, but it's very common anyhow.
Last I looked, there was no code that allowed runtime checking for incorrect nulls in non-nullable-marked fields, which is particularly annoying if there's even an iota of not-yet-annotated or incorrectly annotated code, including e.g. stuff like deserialization.
Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.
Still, if you engage constructively and are willing to massage your patterns and habits you can surely get to like 99% type-checkable code, and that's still a really good help.
> Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.
If it's an object, it's as simple as having a static method on a type, like FromA(A value), and then having that static method call the constructor internally after it has assembled the needed state. That's how you'd do it in Rust anyway. There will be a warning (or an error if you elevate those) if a constructor exits without having initialized all fields or properties. Without a constructor, you can mark properties as 'required' to prohibit object construction without assignment to them, including via object-initializer syntax.
Yeah, before required properties/fields, C#'s nullability story was quite weak, it's a pretty critical part of making the annotations cover enough of a codebase to really matter. (technically constructors could have done what required does, but that implies _tons_ of duplication and boilerplate if you have a non-trivial amount of such classes, records, structs and properties/fields within them; not really viable).
Typescript's partial can however do more than that - required means you can practically express a type that cannot be instantiated partially (without absurd amounts of boilerplate anyhow), but if you do, you can't _also_ express that same type but partially initialized. There are lots of really boring everyday cases where partial initialization is very practical. Any code that collects various bits of required input but has the ability to set aside and express the intermediate state of that collection of data while it's being collected or in the event that you fail to complete wants something like partial.
E.g. if you're using the most common C# web platform, asp.net core, to map inputs into a typed object, you are now forced to either express "semantically required but not type-system required" via some other path, or, if you use C# required, choose between unsafe code that nevertheless allows access to objects that never had those properties initialized, or safe code where you can't access any of the rest of the input either, which is annoying for error handling.
typescript's type system could on the other hand express the notion that all or even just some of those properties are missing; it's even pretty easy to express the notion of a mapped type wherein all of the _values_ are replaced by strings - or, say, by a result type. And flow-sensitive type analysis means that sometimes you don't even need any kind of extra type checks to "convert" from such a partial type into the fully initialized flavor; that's implicitly deduced simply because once all properties are statically known to be non-null, well, at that point in the code the object _is_ of the fully initialized type.
So yeah, C#'s nullability story is pretty decent really, but that doesn't mean it's perfect either. I think it's important to mention stuff like Partial because sometimes features like this are looked at without considering the context. Most of these features sound neat in isolation, but are also quite useless in isolation. The real value is in how it allows you to express and change programs whilst simultaneously avoiding programmer error. Having a bit of unsafe code here and there isn't the end of the world, nor is a bit of boilerplate. But if your language requires tons of it all over the place, well, then you're more likely to make stupid mistakes and less likely to have the compiler catch them. So how we deal with the intentional inflexibility of non-nullable reference types matters, at least, IMHO.
Also, this isn't intended to imply that typescript is "better". That has even more common holes that are also unfixable given where it came from and the essential nature of so much interop with type-unsafe JS, and a bunch of other challenges. But in order to mitigate those challenges TS implemented various features, and then we're able to talk about what those feature bring to the table and conversely how their absence affects other languages. Nor is "MOAR FEATURE" a free lunch; I'm sure anybody that's played with almost any language with heavy generics has experienced how complicated it can get. IIRC didn't somebody implement DOOM in the TS type system? I mean, when your error messages are literally demonic, understanding the code may take a while ;-).
nope, still recovering: https://github.com/dotnet/core/blob/main/release-notes/10.0/...
This has nothing to do with null analysis. It simply lets you replace an assignment behind an if with an inline expression.
Yes. It will complain if you assign a null to something that isn't T?.
For the best experience you may want to add `<WarningsAsErrors>nullable</WarningsAsErrors>` to .csproj file.
wow i certainly appreciate your perspective and insight as a regular C# developer! My experience was limited to building a unity project for 6 years and learning the differences from Ruby.
Another commenter suggested another language like crystal, and that might actually be what it really needs, a ruby-like alternative.
I love building libraries, so having the chance to talk about the gotchas with things like this is a fun chance to reflect on what is and is not possible with the tools we have. I guess my favorite "feature" in C# is how willing they are to improve; and that many of the improvements really matter, especially when accumulated over the years. A C# 13 codebase can be so much nicer than a c# 3 codebase... and faster and more portable too. But nothing's perfect!
> i'd really love it if ruby adopted a C# similar approach to typing
interesting that you've called out C# as the specific example of a way to approach typing. c# did nothing new with their typing system, instead copied from java and others https://en.wikipedia.org/wiki/Comparison_of_C_Sharp_and_Java...
“Flexible” scares me with regard to types. Can you elaborate on that thought?
What parts of a C# approach do you think Ruby should adopt? The syntax? I don't think anyone thinks of C# as having a particularly good or noteworthy type syntax or features—anything you can find in C# you can probably find in most other typed languages. I'm curious what specifically you think C#'s typing system is better at than e.g. Typescript, Python or Rust
C#, ew. Crystal (Ruby but statically typed) has nicer type syntax. Or Pony. Or Nim, Odin, and others.
Crystal still has tooling problems and a compile-time problem. Neither is easily solvable. Maybe Kagi will grow to a certain size some day and could invest back into Crystal.
Hrm, i think this is a good point, It might be something better served by a ruby-like alternative. I'll have to tinker with crystal sometime. Thanks for this comment!
Interesting article, but to me it really defeats the point of Ruby. The hyper-dynamic "everything is an object" in the Smalltalk sense of the definition is much of what makes Ruby great. I hate this idea that Ruby needs to be more like Python or Typescript; if you like those languages use those languages.
I get that a lack of types is a problem for massive organizations but turning dynamic languages into typed languages is a time sink for solo developers and small organizations for zero performance benefit. If I wanted a typed language to build web apps with I'd use Java or something.
Hopefully Matz sticks to his guns and never allows type annotations at the language and VM level.
It's all about message passing. Why not allow a compile-time check of whether something responds to a message? Types/interfaces are good. Failing early is a good thing. The earliest you can fail is during editing, and then at compile/interpret time.
In Objective-C, you also pass messages to an object, and it could be anything you want. But it would output warnings that an object/class did not respond to a certain message. You could ignore it, but it would always result in a runtime error obviously.
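Ruby's nearest equivalent to that Objective-C check is respond_to?, but it can only run at runtime; there is no editing- or compile-time warning. A minimal sketch (names illustrative):

```ruby
class Dog
  def bark
    "woof"
  end
end

obj = Dog.new

# does the object answer this message? (checked only at runtime)
obj.respond_to?(:bark)  # => true
obj.respond_to?(:meow)  # => false

# the common dynamic-language idiom: guard before sending
obj.public_send(:bark) if obj.respond_to?(:bark)  # => "woof"
```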
Swift is a much nicer language in terms of typing, and I have started replacing some smaller Ruby scripts with Swift, mostly out of performance needs though (single-file Swift scripts).

The lack of proper typechecking is one of the main reasons I would not use Ruby, especially in larger teams.
Unittests are fun and all, but they serve a different purpose.
> The lack of proper typechecking is one of the main reasons I would not use Ruby, especially in larger teams.
Cool. I'm using Ruby for a startup because it's 10x more productive for building a webapp versus a compiled language and because I'm solo.
And when I want typing I use something like C++ because I don't want to lose productivity without a major performance boost.
The productivity and joy come from the Ruby syntax, its standard library, most gems, and Rails.
That is the best thing about Ruby, not the lack of being able to add type checks or interfaces
> I'm using Ruby for a startup because it's 10x more productive for building a webapp versus a compiled language
I’ve been a Ruby/Rails dev for about 14 years. Just started working on Java projects at work and… holy shit, it slows down the entire organization and they don’t even seem to know how much of it is because of the terrible immovable object that is the mountain of Java code they made. Before this experience I had always assumed that rigid types alone would be a boon… oh how wrong I was. The rigidity just leads to people writing 10x more garbage to get simple things done, which causes a 10x worse maintainability problem. The rate at which I hear “oh lol just another NPE” is astounding, how is this constant occurrence not interpreted as a complete failure of the “type” system????
Maybe a crack team of Java Experts wouldn’t do this, but the reality of most software companies is that people of ALL skill levels are writing code. Poorly written Ruby is a whole lot easier to figure out (as an expert of Ruby) than 10x the amount of code in Java. I hate this.
I find types useful especially for smaller solo projects: because the test suite is much less complete, or even non-existent. Types give quick feedback for a bunch of silly but common errors.
It's entirely unclear to me that types reduce developer velocity on small projects; it seems roughly the same to me. I certainly don't see how it's a "time sink".
But in the end: if you don't like it, just don't use it? Ruby has never been about restricting how people use the language.
> Hopefully Matz sticks to his guns and never allows type annotations at the language and VM level.
Why? Just because they _exist_ doesn't mean you'd need to use them.
> I hate this idea that Ruby needs to be more like Python or Typescript
It's not be more like those, it's be more like helpful, author-friendly programming which is very much Ruby's ethos.
Every time I think about ripping out all of the RBS sig files in my project because I'm tired of maintaining them (I can't use Sorbet for a few reasons), Steep catches a `nil` error ahead of time. Sure we can all say "why didn't you have test coverage?" but ideally you want all help you can get.
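For flavor, here's the shape of nil error being described, in hypothetical code (not from the poster's project): Array#find returns nil on no match, and a checker like Steep flags the unguarded call, whereas plain Ruby raises only when the line actually runs:

```ruby
# Array#find returns nil when nothing matches; calling a method on that
# result is the classic nil error a type checker can flag statically
def find_user(users, id)
  users.find { |u| u[:id] == id }
end

users = [{ id: 1, name: "Ada" }]

find_user(users, 1)[:name]    # => "Ada"

begin
  find_user(users, 2)[:name]  # nil[:name] -- raises only at runtime
rescue NoMethodError => e
  e.class
end
```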
As a Pythoner the most direct value I get out of types is the IDE being smart about autocompletion, so if I'm writing
I can type session. and the IDE knows what kind of object it is and offers me valid choices. If there's trouble with it, it's that many Pythonic idioms are too subtle, so I can't fully specify an API; not to mention I'd like to be able to vary what gets returned using the same API as SQLAlchemy, so I could write something and have the type system know it is returning an iterator that yields iterators that yield rows, as opposed to an iterator that yields rows. It is simple, cheap and reusable code to forward a few fetch-control arguments to SQLAlchemy and add some unmarshalling of rows into documents, but if I want types to work right I need an API that looks more like Java.

If I understand correctly, you can do this with overloads. They don't change the function implementation, but you can type different combinations of parameters and return types.
> Sure we can all say "why didn't you have test coverage?"
Well types are a form of test performed by the compiler.
LLMs can probably help maintain them so that probably could be solved if you start using LLMs more.
This maybe already exists, but it would be nice if RBS or Sorbet had a command you could run that checks that all methods have types and tries to 'fix' anything missing via help from an LLM. You'd still be able to review the changes before committing it, just like with lint autofixing. Also you'd need to set up an LLM API key and be comfortable sharing your code with it.
I respect this view but I can really see the utility of gradual typing as your project grows and matures.
I also wonder if we’ll eventually be able to pass type information to the JIT. If that helps ruby grow I’m all for it.
> I also wonder if we’ll eventually be able to pass type information to the JIT.
It's not like Ruby doesn't have types and the JIT can't determine them. It just happens at runtime, literally the defining feature of dynamic languages.
That’s true but it would be useful to upfront specify exactly what’s in an array and exactly how big it will be for the entire life of the array.
> There was also a project called TypedRuby, largely a passion project of an engineer working at GitHub. After a few weeks of evaluation, it seemed that there were enough bugs in the project that fixing them would involve a near complete rewrite of the project anyways.
There's 6 open bugs and 4 closed ones. This seems like either it's throwing shade or they didn't bother lodging bug reports upstream.
Last commit: 2018
Readme:
>Note: TypedRuby is not currently under active development.
I think "brief check, move on" is reasonable, particularly if it doesn't appear to already be at several-year-stable quality.
We were in direct contact with the author, most bugs were reported over email. The project was a hobby project and we did not feel it to be prudent to flood someone’s hobby project with dozens of issues created by a team working at a venture funded startup.
man i always bounce between loving ruby's speed to build and wishing for just a bit more built-in safety tbh. you ever feel like adding more types actually slows you down or does it make big teams way less stressed?
I've been a Ruby dev since 2006 and started using Sorbet fulltime at work about a year ago. After about six months I started wanting it in my personal projects. Not enough to add the dependency, but if Ruby had it built-in I would probably use it. For myself, it makes it easier to work on code I'm not familiar with either because I didn't write it or I wrote it longer than X days ago.
The Sorbet syntax is pretty bad, though. And things go from bad to worse as soon as you get a bit tricky with composition or want to do something like verify the type of an association.
I haven't tried inline RBS comments yet, but the DSL does look more pleasing.
The team in general had mixed views. Some hated it, some liked it, most developers just don't really care–although now the Sorbet advocates are claiming it helps the AI, so now there is that leaning on the Sorbet lever as well.
> My counter is that when it comes to language design, semantics—what the types mean—are easily 10 times more important than syntax
Sometimes I long for the days before type theory took over programming language research.
You don't need fancy boy theories to see value in types.
I… didn’t say I saw no value in types?
I meant that there was a culture shift around program language research that had given advances in syntax a permanent back seat.
The value in types is telling the compiler to make certain assumptions so it can optimize the output.
There's not a ton of value to be had in something like Typescript or Python types.
I think this is a beginner take.
If you only work in small teams, or small projects, I could see this opinion getting formed and maybe agree.
But having worked on large projects... Types are fucking great. So much value exists in simply having a tool that highlights everywhere in the codebase my change impacts.
It makes refactoring at scale possible and expedient in a way that is simply better. Period.
I can be fearless changing things, because I don't have to wonder about all the places my change might have broken - it literally gives me the exact set of files and lines I need to go fix. I don't even have to rely on tests.
It's... Better. It's better even for my personal projects, and it's unimaginably better for my 300 engineer org.
> The value in types is telling the compiler to make certain assumptions so it can optimize the output.
Correctness is more important (in general and as a benefit of type checking) than optimization. Doing the wrong thing fast is easy, but not valuable.
Perhaps Devil's advocate, but my rebuttal would be that we're not necessarily comparing similar gains to correctness as to optimisation, and that perhaps the correctness gains can be better attained another way where the optimisation gains cannot.
(Personally I find a type checker tremendously valuable for communicating with myself - and my team, I suppose - about assumptions being made in other parts of the program, which can probably be accounted "correctness".)
Yet TS is massively popular. So maybe that’s not the only value?
Popular != Good.
Also what's good for enterprise isn't good for everyone.
I get that orgs probably like TS so that newbie devs don't do crazy things in the code, and for more editor hand-holding. But it's not valuable for everyone, if it was actually better than everyone would be using it, not just some people.
I'm a professional developer with 20 years of experience and I wouldn't dream of starting a new side project—even with myself as the only developer—without types.
I've learned by hard experience that past me was an idiot and future me is clueless. Types are executable documentation that I can leave behind for future me so that future me can get instantaneous feedback while refactoring the project that past me wrote.
Any reason you stick to ruby? Why not use any statically typed language which offers this feature out of the box? We don't argue against typing. We argue about typing like it is done in ruby rbs/sorbet.
I don't use Ruby. I would if it had types. My language of choice right now is TypeScript because it has the best balance of types and dynamism.
[flagged]
Talking about rails, which is what the vast majority of ruby is used for:
- the dynamic piece is exactly what allows you to ship quickly
- most slowness is due to waiting for IO (database or external API calls) and can be easily solved by handling expensive work in the background
- concurrency is solved by having more web workers, and web workers are cheap (database is the bottleneck anyway)
But the amount of money going by is so huge they can take a tiny percent of it to pay their server bills.
Dynamic languages in large systems have been controversial since there have been dynamic languages. It was routine for heroic AI systems of the golden age (1980s) to be prototyped in Lisp and then be rewritten in C++ for speed. You'd hear stories about the people who wrote 500,000 lines of Tcl/Tk to control an offshore oil drilling rig and the folks who think "C++ is a great language for programming in the large" but who don't seem to mind having a 40 minute build.
For example
https://erwan.lemonnier.se/talks/pluto.html
https://janvitek.org/talks/dls09.pdf
> But the amount of money going by is so huge they can take a tiny percent of it to pay their server bills.
I think that's an oversimplification that doesn't do justice to the complexity or difficulty of the issue. Both sides of the argument can make a really good case.
Slow programs (due to language or other factors) increase server costs, but that's only part of the problem. They also increase operational and architectural complexity, meaning you spend more time, effort and specialized expertise on these secondary effects. Note that some of that shimmers through in the article, mentioning "CI flakiness": It's probable that the baseline performance and its effects are a factor there. Performance has a viral effect on everything, positively or negatively. They also complain about async not being a part of the story here, which is also a performance issue.
On the other hand, this shop has hundreds of developers. That means deep expertise, institutional knowledge, tooling, familiarity with each line of code and a particular style that they follow. Ruby is also a language that is optimized for programmer experience, and is apparently a fun language to work with. All those things are very essential investments, individually and as a whole. Then we can assume that they use Ruby like most glue languages are used and either write C/C++ modules where appropriate or attach high performing components into their architecture (DBs, queues, caches etc.)
The reason I'm harping on this point is because I think it's important to acknowledge a more complete reality than the "hardware is cheap, people are expensive" statement. I assume it's very likely a throwaway line and not a hill you'd die on. But it masks two interesting and important things: Performance matters a lot and impacts cost transitively, but people and culture ultimately dominate for very good reasons.
I think you could have very happy devs working on a system like Stripe in either Java or Ruby, the choice of language is less important than everything else.
Personally I would judge an engineering manager on how seriously they take issues with the build. The industry standard is that a junior dev having trouble with the build is told he's on his own, the gold standard is that he gets a checklist and if the checklist doesn't work it is treated as if the dev had said "I have cancer".
A fast language can be slow to build. [1] If you were really designing a system for long term maintainability fast and sensible tests and solid CI would be determining factors. The only velocity metric which needs to be measured is the build time.
At the moment I have a side project I'm doing which is a port of the arangodb API to postgresql intended to bring a few arangodb applications into the open source world. I test at the level where I make a new postgres db and do real operations against it because I really need to do that to soak up all the uncertainty in my mind, never mind test the handful of stored procs that the system depends on. I have 5 big tests instead of the 300 little tests I wish I had because it takes 0.2s to set up an tear down the fixtures and if the tests take too long to run that's really a decision that I'm not going to do any maintenance on the thing in the future.
I am working on a React system which is still in an old version where test frameworks can't tell if all the callbacks are done running which means practically I don't write tests because even though I'm a believer in tests, it already takes 50 sec to run the test suite and frequently as much time to update tests as it takes to make changes. Had I been writing tests at the rate I want my tests would take 500 sec to run.
[1] Though I'd argue that performance concerns are one reason we can't have nice things when it comes to flexible parsers and metaprogramming; Python's use of PEG grammars has so far fallen so short of what it should have been because of these concerns, no unparser to go with the parser, no "mash up SQL or some other language into a certain slot in the Python grammar"
Shopify also uses Ruby (and Rails). Stores shard pretty well so it’s easy to scale, and then you get the productivity gains of writing in Ruby.
Indeed. But I guess having something like 20 million lines of Ruby code and the people that built that, means it makes more sense to spend time on making Ruby safer than re-implementing it all in something like Java.
Especially given the current code base is battled tested, while any re-write would almost inevitably introduce bugs.
With something like Ruby you can also just reimplement certain functions in the compiled language of your choice. The nice thing about dynamic languages are that they're great scripting languages and great in a polyglot codebase.
As a meta comment (responding to your question about downvotes): This is a very, very frequent topic of discussion any time Shopify, Stripe, Github or another huge Ruby shop make a technical post that's interesting to Rubyists. It's very annoying to have to retread the exact same discussion over and over again just because some people have to make this comment every time Ruby is even tangentially mentioned. So people aren't downvoting because they blindly disagree, they're downvoting because this is an uninteresting comment to leave on this article specifically.
Additionally meta: I would also say that complaining about downvotes, generally tends to bring more downvotes.
it got flagged too, which seems like an inappropriate use of flagging
most web services aren’t CPU bottlenecked, and ruby has great testing libraries
[flagged]
Ruby is a language that grants you lots of freedom. Including often the freedom to shoot your own foot. Bolting on types to it is something I expect to be possible in Ruby and completely in tune with one or both my previous statements.
What about wisdom and a pragmatic approach? If you ever need types, why not choose a typed language for your case, instead of re-implementing something and wasting so much time on work that has no real use? Why not use C bindings for typechecks? That feature is already available in Ruby - *rb_obj_is_kind_of(value, rb_cString)* - and has no cost. Is it wise? I don't think so; I would be shocked if I saw such code in production.
We care so much about the environment and climate change, but think it is OK to waste resources on such activity. It is not a research task. People really advocate using such a thing as Sorbet in real-world applications. DHH even removed TypeScript from his frontend libraries, and this typing is like kindergarten for some homemade rack/hanami/dryrb application that will serve 2 people in production (the creator of this stuff and a random visitor). Maybe someone could tell me why we need such a thing in the Ruby ecosystem as an rbs/sorbet library. I'm really wondering. Not because Ruby grants us freedom and we can do that; that's not an answer. We can do an enormous number of absurd things. But the question is: why?
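For comparison, the Ruby-level counterpart of rb_obj_is_kind_of is Object#is_a?; a hand-rolled runtime check looks like this (method name hypothetical):

```ruby
# a manual runtime type check; this is what is_a? guards buy you:
# failure at the boundary with a clear message, but still only at runtime
def shout(value)
  raise TypeError, "expected a String, got #{value.class}" unless value.is_a?(String)
  value.upcase
end

shout("hello")  # => "HELLO"

begin
  shout(42)     # raises TypeError at runtime, not at analysis time
rescue TypeError => e
  e.message
end
```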
Woah there, you're making a straw man here but I'll bite. I'll preface it with the fact that I also think types are against the very nature of Ruby as I know and practice it.
But, why not choose a proper statically typed language? Because you've inherited a mastodon of a legacy app that started simple, then matured with thick layers of features. How can you wrangle this? Oh, just rewrite it in Rust. Or you could seek an iterative approach and introduce tools to help you. Like bolted-on types.
Oh and I'm not sure I follow what all this about the environment is... If one is so conscious about it, maybe choose a different field entirely?
> Why would one try to make Ruby "look like" a statically typed language? It violates the idea of Ruby, which is a dynamically typed language.
People said the exact same things when Typescript was first being released. I think history has proven all those people decisively wrong. I think you'd need a compelling reason that Ruby has some je ne sais quoi that Javascript doesn't in order to support your point.
The thing is I don't want ruby to meet the fate of javascript. Javascript community is much bigger than ruby will ever be. They had to manage so many people writing too many code, and invented tool for them so they can contribute without making them learn completely new programming language. Easier to jump in. And I should mention typescript types is much better than what we have in rbs/sorbet. They have types inside language, without annotation like what we have separate sig DSL in ruby.
We should invent new programming language typed_ruby and give him separate life, and then history will prove us whether it should have been implemented in the first place. But not try to push it into stdlib and persuade everyone else that it is the right way to go.
Plenty of people dislike TypeScript, and don't feel the supposed benefits outweigh the costs.
Typescript's success has much more to do with JavaScript being the only choice we have in the browser than it has to do with particular programming language design that we must seek to replicate.
Sorbet is intended for people who have no choice but to use ruby.
> It becomes more verbose with this type annotation everywhere making it look like nightmare.
this is part of the appeal of the "header file approach" a la RBS, as mentioned in the article https://blog.jez.io/history-of-sorbet-syntax/#the-header-fil...
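As a rough sketch of that approach (class and method names hypothetical), the RBS annotations live in a separate signature file while the .rb file stays plain Ruby:

```rbs
# user.rbs — signatures only; user.rb remains untouched Ruby
class User
  attr_reader name: String
  def initialize: (name: String) -> void
  def greet: () -> String
end
```

The trade-off is the usual header-file one: the implementation stays clean, but the signatures can drift out of sync with the code they describe.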
it's amazing to me that the industry hasn't found a better way to support bits of code interacting with other bits of code properly, than adding an attribute to a bit of code calling it a "type"
...saying this as someone who benefits from it but also rarely uses subtyping ("poodle" type sub to "dog" type sub to "animal" type) or any of the other benefits commonly associated with typing. for me the benefit is code checking and code navigation of unfamiliar code bases
structural typing? duck typing?
structural typing is a good step - but afaik there's no way to distinguish between different types that have the same structure but different context - a "truth.right" will be different from a "direction.right", but it's up to the dev to keep that straight.
duck typing is what ruby does without sorbet or rbs, and it conveys nothing about how the code bits interact at the boundary. if a "dog" is expected but a "cat" is passed in, things will work just fine until runtime, when the cat is asked to bark. (saying this as someone who is a big fan of ruby overall)
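The dog/cat scenario above can be sketched directly (class names hypothetical); nothing complains at the boundary, only when the method is actually called:

```ruby
# Duck typing: no check happens when a Cat is passed where a Dog is
# expected — the failure only surfaces when #bark is actually invoked.
class Dog
  def bark
    "woof"
  end
end

class Cat
  def meow
    "meow"
  end
end

def make_noise(animal)
  animal.bark  # no static check; blows up here at runtime for a Cat
end

make_noise(Dog.new)  # => "woof"

begin
  make_noise(Cat.new)
rescue NoMethodError => e
  e  # the error appears far from the call-site mistake that caused it
end
```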
With great power comes great responsibility.
It is the developer's responsibility to ensure you won't receive a cat, at the design stage of your classes. And inheritance is bad too; it's also a sharp knife. But annotating everything
does not improve your design; it just makes things noisy and clumsy. I would argue that if your code needs type annotations, it smells like bad design and should be considered for refactoring.