This is really cool
One thing I'd suggest for any hardware product: when doing your bill of materials, provide links and show estimated costs. Sure, these will change, but having a rough idea of the costs is really helpful, especially for people coming in from somewhere like HN. It can make the difference in whether someone decides to try it on their own or not. It's the ballpark figures that matter, not the specifics.
You did all that research - write it down, if for no one but yourself! Providing links is highly helpful because names can be funky, and links help people (including your future self) know whether this is the same thing or not. It's always noisy, but these things reduce noise. Importantly, they take no time while you're doing the project (you literally bought the parts, so you have the link and the price). It saves you a lot of hassle, not just others. Document, because no one remembers anything after a few days or weeks. It takes 10 seconds to write it down and 30 minutes to do the thing all over again, so be lazy and document. I think this is one of the biggest lessons I learned when I started as an engineer: you save yourself so much time. You just have to fight that dumb part of your head that's trying to convince you it doesn't save time. (Same with documenting code [0])
Here, I did a quick "15 minute" look. May not be accurate:

Lidar (one of):
LD06: $80 https://www.aliexpress.us/item/3256803352905216.html
LD19: $70 https://www.amazon.com/DTOF-D300-Distance-Obstacle-Education/dp/B0B1V8D36H
STL27L: $160 https://www.dfrobot.com/product-2726.html
Camera and lens: $60 https://www.amazon.com/Arducam-Raspberry-Camera-Distortion-Compatible/dp/B0B1MN721K
Raspberry Pi 4: $50
NEMA17 42-23 stepper: $10 https://www.amazon.com/SIMAX3D-Nema17-Stepper-Motor/dp/B0CQLFNSMJ
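To sanity-check the ballpark, here's a minimal sketch of the BOM arithmetic (prices are the rough figures above and will drift; the names are just labels, not canonical SKUs):

    # Rough BOM totals; all prices are ballpark figures from a quick look.
    lidar_options = {"LD06": 80, "LD19": 70, "STL27L": 160}
    common_parts = {"camera_and_lens": 60, "raspberry_pi_4": 50, "nema17_stepper": 10}

    base = sum(common_parts.values())           # $120 before choosing a lidar
    low = base + min(lidar_options.values())    # $190 with the LD19
    high = base + max(lidar_options.values())   # $280 with the STL27L
    print(f"${low}-${high} before power supply and buck converter")

(The $200 low end quoted below assumes the LD06 rather than the slightly cheaper LD19.)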
That gives us $200-$280 before counting the power supply and buck converter.

[0] When I wrote the code, only me and god understood what was going on. But as time marched on, now only god knows.
Learning projects like this are about to get a lot less accessible due to the extreme tariffs and the elimination of the de minimis exemption. Take that BOM and multiply it by 2x or 3x depending on the source and how many different shipments it arrives in.
I can't tell you how depressing it is to go from having access to cheap learning materials for introducing kids (and adults) to electronics to watching it all get taxed away in the name of improving US competitiveness or something. Total footgun.
Doesn't change the fact that the advice is still beneficial. At worst, you end up with a good record of the effect of these tariffs.
I'd call the tariffs the second death of hardware, though. The first was when we killed all the parts stores - that was a slower death, coupled with the loss of right to repair. But we've been making big strides in that domain, so I hope we can undo that death. If we can also undo the dumb tariffs, then ironically we might have a chance to bring back hardware, which somewhat seems in line with what that party (claims to/pretends to) wants.
> we killed all the parts stores
We ran out of people buying from parts stores - hobby electronics became less popular.
Eras of hobbies:
A new Radio Shack dealer just opened in a rural area, with an emphasis on radio (it's near an off-roading park), and your comment reminded me what terrible timing this is for them. Such a cruel twist.
https://tekshack.com
For Americans, that is.
Come to Australia
The good thing is it's on GitHub, so you can submit a pull request for a BOM to help the person out.
This is the most ungrateful comment I've read today, harping away about how 'it should have been done'.
Well you fucking do it then.
I know that my time is so short (because I have a family) that if I can even do a project, getting it done will be enough of a stretch; I'm almost certainly not going to document it, and if I need to come back and re-do it, I'm probably not going to bother. Not all of us live in mom's basement and have the luxury of extra time.
It was not ungrateful.
It was a general suggestion for everyone doing hardware projects and OP did a lookup and provided the additional info / links, which sparked further discussions.
Chill.
He did 'do it', and saved us all the 10-15 minutes it took.
Incredible that this is too expensive for a company like Tesla.
What is the HN opinion on Tesla skipping lidar? Having spent some time with computer vision in university, I think it's insane to skip it - sure, stereo reconstruction is powerful, but lighting conditions have such an impact on results that having some robust depth data feels like a no-brainer, and skipping it feels like malignant neglect.
As someone who's done a lot of computer vision: it is insane to skip it. And it's sad, because what everyone missed in that viral Mark Rober video [0] was not the Looney Tunes wall hit but the fucking kid in the smoke. Add all the cameras and AI you want, you ain't changing the laws of physics: visible light doesn't penetrate smoke. But radar does. Every (traditional) engineer knows that safe systems have redundancy, and that they get that redundancy through differing modalities. Use cameras, but also use radar, lidar, and even millimeter wave. Using just cameras isn't just tying one hand behind your back, it's shooting yourself in the kneecap afterwards.
[0] https://www.youtube.com/watch?v=IQJL3htsDyQ
The argument is that humans manage the task without lidar, and automation doesn't have to be perfect - it just has to be better than humans to be a net positive. It seems to me you might as well use lidar if it's cheap enough, but the argument that computer systems can outcompete human drivers without using lidar is at least reasonable, although not yet proven.
Extending this line of thought, I wonder why Tesla didn't make cars on two legs instead of insisting on wheels?
(Just wanted to make sure - this is not a stab at you, I'm well aware that the original argument is from tesla)
We could extend the argument further. Why build a self-driving vehicle at all? Build a humanoid robot to drive the car for you! The argument that computer systems can outcompete human drivers, without using lidar, is at least reasonable, although not yet proven.
(I didn't want to just make sure - this is a stab)
It's a dumb argument on multiple counts. While it's a routine argument in software engineering, it's the kind of argument that gets people fired, and used against them in testimony, in other engineering disciplines.
First off, humans don't do it "just by vision". Sure, we don't have lidar, but we have hearing, we have touch, we have tons of experience. We can create world models, for Christ's sake, and that means modeling physics. I'm sure you've seen papers that claim world models, but I'm an ML researcher who also has a physics degree, and I'm not afraid to tell you that's bullshit. It's about as honest as Altman calling GPT PhD-level intelligence. A PhD has very little to do with the ability to recall information.
Second off, it doesn't matter much how humans do it; it matters what the car can do. Why limit yourself? There are tons of cars with radar and lidar. They're not more expensive, and they can see an object in fog or in poor light. That's something humans can't do! Why in the world would you decide not to do that? You can make an argument about price, but that argument changes when the thing becomes cheaper - and when that happens, you're just someone adding danger for no reason. You can't argue that cameras alone will be safer. They categorically aren't. The physics is in your way.
But that price argument was being made when Tesla first said they were going to use only cameras: everyone knew lidar would come down with scale, which is why many other manufacturers went in that direction. Scale is mutually beneficial, so Tesla would have benefited from joining.
> can outcompete human drivers

Third, be careful with those claims. I'm more willing to believe third-party reports, like those from the NHTSA, than Tesla directly [0]

[0] https://www.forbes.com/sites/stevebanker/2025/02/11/tesla-ag...
Is Tesla getting into a legal mess if they need to add sensors to make self-driving work, when they've already sold that feature to car owners? Would that imply that they need to retrofit already-sold cars with upgraded sensor packages?
Yes, and this is already turning out to be a problem for them. They've acknowledged that HW3 is not sufficient, and will be on the hook for those who bought the FSD package with those cars.
That isn't the end of the world, but it'd turn into a much bigger problem if they also had to add additional sensors and body modifications to support those sensors.
My understanding is that they went the opposite direction - their cars used to have lidar, but don’t anymore.
Worse, they turned them off for the older vehicles with a software update.
They never had lidar. They had a very low resolution radar that was used for AP, and some pretty terrible ultrasonic sensors with massive blind spots.
> What is the HN opinion on Tesla skipping lidar?
Short-sighted and egotistical.
There likely have been deaths and injuries that would have been prevented by lidar, and there will likely be more in the future.
> the HN opinion
I'm not sure why you'd think HN has a monolithic opinion, this is a site with myriad different users.
Maybe they're more asking for the whole breadth of opinions available from the HN community?
Interesting claim I read in another thread a couple weeks ago:
>Tesla Vision is, currently, legally below minimum human vision requirements and has historically been sold despite being nearly legally blind.
https://news.ycombinator.com/item?id=43605034
There will come a time in the very near future (read: five years) when people will not buy a vehicle (car, bike, etc.) without lidar, as its price becomes as insignificant as a reversing camera's and it becomes commonplace.
Personally, I'll now not buy any vehicle without camera-assisted parking, and apparently many people agree this is an important feature, including Marques Brownlee [1].
[1] Reviewing my First Car: Toyota Camry Hybrid! [video]:
https://youtu.be/Az6nemkRB1Y
Radar technology offers a range of applications, including the ability to detect objects around corners, behind obstacles such as brick walls, and even penetrate human bodies at specific frequencies. However, when multiple sensors yield similar results, it becomes challenging and costly to discriminate which results are valid.
Operating radar at a specific frequency, such as 2.45 GHz (a microwave frequency often used due to its affordability), can be ineffective in environments rich in water droplets (e.g., rain), as these can dominate the radar signals. Higher frequencies enable the detection of smaller water droplets, but switching between frequencies can be expensive. Additionally, varying the radar's detection range to identify objects of different sizes complicates the calculations, involving factors such as minimum and maximum range, power, and time on target.
Cameras typically detect non-moving objects by comparing successive images. In contrast, radar can identify both stationary and moving objects and determine their direction relative to the sensor by emitting a frequency and analyzing the reflected pulses. Lidar, on the other hand, uses light to measure the distance to objects in its path, employing a photoreceptor to capture the reflected light.
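For concreteness, the power/range tradeoffs mentioned above follow from the standard radar range equation, and the moving-object detection comes from the Doppler shift (textbook forms, not tied to any particular automotive sensor):

    P_r = \frac{P_t G_t G_r \lambda^2 \sigma}{(4\pi)^3 R^4}, \qquad f_d = \frac{2 v_r}{\lambda}

Received power P_r falls off as 1/R^4 (the signal spreads out on both the outbound and return paths), so doubling detection range costs roughly 16x in transmit power or antenna gain; the Doppler shift f_d of a target closing at radial velocity v_r is what lets radar separate moving objects from stationary clutter.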
I can only speak for myself, but I work on this stuff in this industry: Tesla’s choice is asinine at this point. It’s one thing to claim cameras only and find that won’t work and pivot, but they are so dug in that they can’t admit they were wrong and won’t do so. So it’s asinine now.
>It’s one thing to claim cameras only and find that won’t work and pivot, but they are so dug in that they can’t admit they were wrong and won’t do so. So it’s asinine now.
Didn't they start with lidar or radar or similar and then go back to only using vision based technologies?
What do you call a person that has only visual sensing? Disabled.
Humans have sight, touch, taste, hearing, smell, and vestibular senses. Only a portion of the systems used in self-driving automation.
Well, I haven't used taste or smell in my driving yet. Touch, only as far as vibration and steering-wheel torque (both not difficult to sense with electronics).
That narrows it down a bit.
A similar situation to Jobs reciting research on how optimal the one-button mouse is. A thought bubble.
Why, we will perhaps never know. Likely they were early and it was deemed too expensive back then, or they didn't find a supplier they could work with. Now there's too much prestige in it, and they can never back down, since that would be admitting to a mistake.
It would be one thing if it was a one time event but then they repeated that playbook with the lack of a rain sensor.
I think it was a valid decision that turned out to be incorrect and is staying in place out of stubbornness. People really like criticizing decisions in hindsight, especially here, where the armchair engineer with the benefit of hindsight is all too common.
People have been criticizing this decision from the get-go. It may have spread from engineers to the general public, but let's be honest, the latter doesn't matter much on a topic like this anyway.
It was a valid experiment and pushed computer vision but it clearly failed a long time ago. The fact that Teslas are not only still sold without lidar but “autopilot” is pushed as safe is disgusting.
People will die (and have already died) horrifically because of this decision. It’s morally bankrupt.
I assume the same would apply to any car not using LIDAR? Or just Tesla because they decided on a different tech stack?
This isn't too expensive for Tesla, it's just nowhere near the level needed for an AV. Automotive lidars are 10-20 scans/second, rated for dust/rain/etc, and need a range of at least 50 meters, but 100-200 is more ideal. Not a fan of Tesla's approach, but I wanted to clarify that it's not like they can just use a lidar like this and call it a day. The specs are completely different and that really drives up cost!
The requirements for electronics in a car are pretty extreme (temperature, durability) - not that I disagree, but it's not an apples-to-apples comparison.
> requirements for electronics in a car are pretty extreme
+ the salaries of everyone working on that stuff, not just assembly but also writing the code to support it
Not that I disagree, either: at the volumes a modest car company puts out, I'd assume it's easily worth the, say, 3% premium on the car's total price to have something that can actually see things you don't see, and thus makes for a safer system. It might even reduce costs by lowering the requirements on the vision hardware and software, but that's not something I can know. There are a lot of unknowns here, which I think means we can't really do a good comparison.
That hasn’t stopped Tesla before. They have a track record of treating automotive-grade quality standards as optional when doing electronics sourcing[1].
As the article notes, Tesla conveniently "fixed" the thermals-and-durability issue this caused by inventing a feature called cabin overheat protection and marketing it as protecting people/animals from overheating, not the non-automotive-spec electronics in the cabin.
If you can’t bring auto quality electronics to the car, just change the car so it avoids standard auto thermal conditions ¯\_(ツ)_/¯
https://www.thedrive.com/tech/27989/teslas-screen-saga-shows...
Don’t they constantly get tested as the safest car in the world? I saw it years ago in some American news and the first google result is from New Zealand last year https://www.drivencarguide.co.nz/news/tesla-model-y-is-the-s...
They wanted to sell “self driving ready” packages 10 years ago, when LiDAR actually was expensive. So at the time, they had to make big deal about LiDAR being unnecessary.
But now it has come down in price, reportedly by more than a factor of ten, so at some point a logical person would revisit that decision.
It's not only about logic; his ego is now involved, which virtually guarantees it will never be revisited.
Would this not also be the case had Tesla embraced the tech and installed thousands into their cars?
Do we really need LiDAR in a Tesla? I own a Chevy Trax and it has LKAS and ADAS - not even using LiDAR, just sensor fusion with camera and radar. It's a cheap car, too. It's car-assisted driving.
I have driven a Tesla once but not with the added feature.
The lidars used on self-driving vehicles are far more capable and far more expensive.
Not by that much: current-generation hardware for cars is $500-700, and some of the OEMs expect to bring the price below $200 with the next generation of equipment. Now that BYD is putting self-driving in almost every car, it will supercharge adoption, and lidar prices might drop even faster with economies of scale.
My (tenuous) understanding is that the challenge with lidar isn't necessarily the cost of the sensor(s) but the bandwidth and compute required to meaningfully process the point cloud the sensors produce, at a rate/latency acceptable for driving. So the sensors themselves can be a few hundred bucks but what other parts of the system also need to be more expensive?
That seems very unlikely to me. Automotive applications are already doing things like depth reconstruction based on multiple camera angles and ML inference in real time. Why should processing a depth point cloud be significantly more difficult than those things?
The basis for my understanding is a convo with a Google engineer who was working on self-driving stuff around 10-15 years ago -- not sure exactly when, and things have probably changed since then.
At the time they used just a single roof-mounted lidar unit. I remember him saying the one they were using produced point cloud data on the order of Tbps, and they needed custom hardware to process it. So I guess the point cloud data isn't necessarily harder to process than video, but if the sensor's angular resolution and sample rate are high enough, it's just the volume of data that makes it challenging.
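As a rough back-of-the-envelope (these sensor specs are illustrative assumptions, loosely shaped like a modern 128-channel spinning unit, not any specific product):

    # Rough lidar point cloud data-rate estimate; all specs are assumptions.
    channels = 128          # vertical beams
    azimuth_steps = 2048    # horizontal samples per revolution
    revs_per_sec = 10       # 10 Hz spin rate
    bytes_per_point = 16    # xyz floats plus intensity/timestamp

    points_per_sec = channels * azimuth_steps * revs_per_sec   # ~2.6M points/s
    mbytes_per_sec = points_per_sec * bytes_per_point / 1e6    # ~42 MB/s
    print(f"{points_per_sec / 1e6:.1f} Mpts/s, {mbytes_per_sec:.0f} MB/s")

That's a lot of data but nowhere near Tbps, so the figure above might refer to raw detector waveforms before point extraction, which can be orders of magnitude larger.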
Maybe at that time. 10-15 years later, we have graphics cards doing actual ray tracing, and lidar computing is way less complex. Anyway, the $200 I mentioned is for the whole system, not just the sensors, so that would include signal processing.
Makes sense. Maybe doing self-driving well just requires ridiculously high bandwidth regardless of the data source. Relatedly, the human visual system consumes a surprisingly large quantity of resources, from metabolic budget to brain real estate.
The whole point of lidar is to massively increase the amount of ranged data you have to work with.
This doesn't seem to stop Tesla's competition in self-driving cars from implementing it - and succeeding far more in safety and functionality while doing so.
What is the cost of a human life?
edit: seriously, a $4,000 sensor and an extra, say, $3,000 for an upgraded computer module so your car can drive itself is just too much to afford?
Valuation of a statistical life is $5-10M, depending on who you ask[0].
So it’s too much to afford, or at least not singularly justifiable, unless more than 1 out of every 2000 cars kills someone in a way that would be prevented by LIDAR.
0: https://www.sciencedirect.com/science/article/pii/S109830152...
At this point having "something" would probably even beat having nothing.
I guess it's simply a big-numbers thing: if you sell lots of cars, shaving a couple of hundred dollars off each car adds up.
Karpathy addressed this question at the time:
https://news.ycombinator.com/item?id=33397093
Of course he was working for Tesla back then. His opinions might be different today given that Elon is no longer signing his paycheck.
What a weird argument by Karpathy. He has a degree in physics; how does this dude not know that radar can see things that cameras can't? That argument doesn't make any sense. That there are supply chains and things break, and this makes it unsafe? Well, that's true for every single bolt, every nut, every little thing. I understand a drive toward simplicity, but you can't just throw fancy words like entropy in there while ignoring the literal physics that says camera + radar is lower entropy than camera without radar. There is literally more (unique!) information available to you!
His opinions aren’t much different in interviews I’ve heard since, although of course that doesn’t mean he’s completely unbiased now.
Money better spent on marketing. Like that song about "him having a plan".
After all, car sales don't drive the stock market. Public opinion does.
I’ll bet a lot of Tesla investors are wishing neither of those applied these days.
This isn't really something you'd ship in a car though. It's cool that we have such a rich ecosystem of devices that this can be made "off-the-shelf" - but for production use in a car? Not really practical.
The actual scanners: [1]
Max range 12 meters. Beyond that seems to be where it starts to get expensive: the light source, filters, and sensors all have to get better.
Good enough for most small robots. Maybe good enough for the minor sensors on self-driving cars, the ones that cover the vehicle perimeter so kids and dogs are reliably sensed. The big long-range LIDAR up top is still hard.
[1] https://www.ldrobot.com/
I'd like to know where this price jump really comes from; Google doesn't help me. My first guess is that laser safety becomes an active control problem at this point - the scanning mirror needs to keep moving so the laser can't deposit a damaging amount of energy onto a human retina. So you need a safety-critical control system that constantly monitors mirror speed and position and shuts down the laser when it becomes too slow. How wrong am I?
More output power, larger optics, more sensitive detectors, more rejection of unwanted light, more pixels, larger rotating machinery, active stabilization... And the big units are low volume.
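Most of those factors fall out of a simple pulsed-lidar link budget; a commonly used idealized form (diffuse target filling the beam, ignoring atmospheric loss) is:

    P_r \approx P_t \cdot \rho \cdot \frac{A_{rx}}{\pi R^2} \cdot \eta

Received power falls as 1/R^2, so going from a 12 m toy to a 200 m automotive unit costs a factor of ~280 in signal, which you have to buy back with transmit power (capped by eye safety), receiver aperture A_rx, detector sensitivity, and background-light rejection - exactly the list above.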
Here's a top of car LIDAR you can buy for about US$27,000.[1] 128 pixels high sensor, spinning. This is roughly comparable to Waymo's sensor.
[1] https://www.hesaitech.com/product/ot128/
Somewhat related. I'm looking for a cheap way to measure distances to approx 10 microns accuracy, over distances on the order of 300mm. Any ideas?
Does the interval you're measuring move around much?
Can the measurement system touch or be affixed to it?
Sounds like a pair of nice calipers might work. Depending on your precision needs, you might get away with the same approach: a sliding grid of capacitive cells passing over a fixed grid of measurement cells. A microcontroller measures them as the head slides through, with atan2() for the final result. The readout-only part of this is called a DRO (Digital ReadOut).
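A minimal sketch of the atan2 step, assuming the grid yields two roughly sinusoidal signals in quadrature (the usual arrangement in capacitive calipers; the pitch and names here are made up for illustration):

    import math

    GRID_PITCH_MM = 5.08  # spatial period of the capacitive grid (assumed)

    def position_within_pitch(sin_ch: float, cos_ch: float) -> float:
        """Two quadrature capacitive readings -> position within one grid
        pitch. Absolute position also needs a count of whole pitches,
        tracked by watching the phase wrap as the head slides."""
        phase = math.atan2(sin_ch, cos_ch)      # -pi..pi
        frac = (phase / (2 * math.pi)) % 1.0    # 0..1 within one pitch
        return frac * GRID_PITCH_MM

    print(position_within_pitch(0.7, 0.7))  # 0.635 mm into the pitch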
I have some design ideas for a diy system, how much money/time are you willing to spend for experimentation?
What counts as cheap to you?
I'm thinking about automating something along these lines:
https://youtu.be/hnHjrz_inQU?si=dNzXVBVFsr7e8m_6
Off the shelf lasers and camera sensors can be hacked around with DIY for some pretty unexpected precision.
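If it helps, the core of that approach is plain triangulation: a laser spot viewed by an offset camera shifts across the sensor as the surface distance changes. A sketch with hypothetical numbers (real setups need calibration):

    # Laser triangulation geometry; all parameters are illustrative.
    FOCAL_LENGTH_MM = 16.0   # lens focal length (assumed)
    BASELINE_MM = 50.0       # laser-to-camera offset (assumed)

    def depth_from_shift(shift_on_sensor_mm: float) -> float:
        """Distance to the surface from the spot's lateral position
        on the sensor (similar triangles: z = f * b / shift)."""
        return FOCAL_LENGTH_MM * BASELINE_MM / shift_on_sensor_mm

    z = depth_from_shift(8.0)  # 16 * 50 / 8 = 100 mm working distance
    # Sensitivity here: dz/dshift = -f*b/shift^2 = -12.5 mm per mm of shift,
    # so a 3.3 um pixel corresponds to ~41 um of depth; sub-pixel centroiding
    # of the spot/line is what buys back the precision.
    print(z)

Measuring a larger area is usually done by sweeping the laser line or the part under it, rather than by using a bigger sensor.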
Thanks for sharing this video; I'm also interested in this exact thing. However, from my understanding, with an approach like this you are limited by the size of the image sensor, meaning that if my surface has a bump larger than what the image sensor covers, it would not get measured. Any idea how to make something like this work if the goal is to measure slightly larger topographical changes, at a less granular resolution, in the 100mm range?
We had a similar issue at one point, and had to build something custom that cost way more than I'd like to admit. Thus, I would recommend just looking at DRO kits for CNC milling machines.
If your project is not budget constrained, then there are complete closed-loop stage solutions around:
https://www.pi-usa.us/en/
https://xeryon.com
Best of luck, and prepare yourself for sticker shock... lol =3
OCT.
There are cheap OCT systems?
For what purpose?
https://xyproblem.info/
Answering their question would be more helpful here, even if it doesn't solve their problem.
Not OP but I'm in the same market, 3d printing and desktop CNC for me.
Assuming the XY problem based on nothing is pointless and counterproductive and only serves to make you feel smart.
The Sketchfab examples are fantastic - being able to move around in a 3D space, like it's some kind of sci-fi simulation.
The mouse controls are confusing the heck out of me. It shows a 'grab' icon, but nothing about it grabs: the movement direction is the opposite of what you'd expect, which feels completely unnatural.
You could probably harvest these from robot vacuums on ebay/goodwill.
These = lidar sensors
For home improvement projects, this could be quite useful for generating point cloud maps of places that are hard to get to. For example, I have drywall installations I would love to get behind to check how things look; this would be great for that.
There's a lot of stuff that was better in the "good old days".
But to be alive when it's possible for gifted individuals to create technology like this is just incredible.
The GY-521 in particular, and the MPU6050 in general, make quite poor IMUs. Why do you use them, and what for in this particular case? What do they do in this setup?
Do you have other sensors in the same price range that you'd recommend instead for most uses? How much accuracy improvement would you expect?
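For context on what the chip actually provides, here's a minimal sketch of reading an MPU6050 from a Pi over I2C (using the smbus2 package; register addresses are from the MPU6050 datasheet, and the scaling assumes the default ±2 g / ±250 °/s ranges):

    from smbus2 import SMBus

    MPU_ADDR = 0x68        # default I2C address (AD0 pin low)
    PWR_MGMT_1 = 0x6B      # power management register
    ACCEL_XOUT_H = 0x3B    # start of 14 bytes: accel xyz, temp, gyro xyz

    def to_int16(hi: int, lo: int) -> int:
        v = (hi << 8) | lo
        return v - 65536 if v & 0x8000 else v

    with SMBus(1) as bus:
        bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake from sleep
        raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 14)
        ax, ay, az = (to_int16(raw[i], raw[i + 1]) / 16384.0  # LSB per g
                      for i in range(0, 6, 2))
        gx, gy, gz = (to_int16(raw[i], raw[i + 1]) / 131.0    # LSB per deg/s
                      for i in range(8, 14, 2))
        print(f"accel [g]: {ax:.2f} {ay:.2f} {az:.2f}  "
              f"gyro [dps]: {gx:.1f} {gy:.1f} {gz:.1f}")

In a scanner like this, the IMU is presumably there to level the point cloud (find the gravity vector) rather than for motion tracking - a job even a noisy MPU6050 can do once you average.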
I've been toying with photogrammetry a little bit lately, specifically for scanning indoor rooms and spaces. So far I'm finding metashape the most suitable for it, but some of the precision isn't great (but I'm still improving my technique). I mostly want to convert the interior of one real building into a digital model for preservation and analysis. I've briefly considered LIDAR, but put it in the too hard/expensive bucket. This project seems to challenge that assumption.
What does the software post-processing look like for this? Can I get a point cloud that I can then merge with other data (like DSLR photographs for texturing)?
I see in their second image [1] that some of the wall is not scanned because it was blocked by a hanging lamp, and possibly the LIDAR could not see over the top of the couch either. Can I merge two (or more) point clouds to see around objects and corners? Will software be able to self-align common walls/points to identify that it's in the same physical room, or will that require some jiggery-pokery? Is there a LIDAR equivalent of coded targets or ARTags [0]? Would this scale to multiple rooms?
Is this even worth considering, or will it be more hassle than it's worth compared to well-done photogrammetry?
(Apologies for the peak-of-mount-stupid questions, I don't know what I don't know)
0: https://en.wikipedia.org/wiki/ARTag
1: https://github.com/PiLiDAR/PiLiDAR/raw/main/images/interior....
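On the merging question: aligning overlapping scans is standard point cloud registration, and off-the-shelf tools handle it. A minimal sketch using Open3D's ICP (assuming two scans with decent overlap and a rough initial guess; the file names are made up):

    import numpy as np
    import open3d as o3d

    # Two overlapping scans of the same room (hypothetical files).
    source = o3d.io.read_point_cloud("scan_position_a.ply")
    target = o3d.io.read_point_cloud("scan_position_b.ply")

    # Refine the alignment with point-to-point ICP. The initial guess is
    # identity here; in practice it comes from markers, a global
    # registration step, or manual rough alignment.
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.05,  # 5 cm correspondence search radius
        np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(result.fitness, result.inlier_rmse)

    merged = source.transform(result.transformation) + target
    o3d.io.write_point_cloud("merged.ply", merged)

ICP only refines an existing rough alignment; with no initial guess you'd run a global registration first (or use printed markers, which play the role ARTags do in photogrammetry).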
Hi! Thanks for sharing this amazing work. I’m curious about the scalability and performance of PiLiDAR when deployed on large-scale outdoor datasets. Have you benchmarked it on datasets like SemanticKITTI or nuScenes? If so, could you share any insights on runtime, memory usage, and how well it generalizes beyond the indoor scenes used in your paper?
I think you (or I, please correct me if that's the case) misunderstood something here: this is a DIY lidar scanner for data acquisition. Those datasets are mostly created using RGB cameras, with the point clouds generated in a later post-processing step.
So it's not a model for processing data but rather a hardware hack for having a real lidar - as in real depth data.
You can throw anything you like at it.
Oh hey! This is exactly what I was looking for just a couple weeks ago! I've had parts to prototype something roughly equivalent to this sitting in my cart on Amazon for a couple weeks now, but I've been very uncertain on my choice of actual lidar scanner.
I'll have to look into this as a starting point when I get back from Easter vacation.
How do you make it so your LIDAR doesn't interfere with someone else's LIDAR?
How safe are these sorts of sensors for eyes?
Wow, lidars have become so good. This is amazing. I had no idea.
It's not obvious what the heck this is without reading into it. A full 4pi steradian scanner? a 360 degree 1 channel LIDAR? A fisheye camera plus some single channel LIDAR plus monocular depth estimation networks to cover everything not in the plane of the lidar?
It would be great to clarify what it is in the first sentence.
I believe it's a 360deg planar lidar mounted on a vertical plane, with a motor to rotate it around and slowly cover a full 4pi sphere. There's also a fisheye camera integrated in. This is a pretty common setup for scanning stationary spaces (usually tripod mounted)
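If it helps to picture that geometry, here's a minimal sketch (my own illustration, not the project's code) of how each lidar return would map to a 3D point, assuming the scan plane is vertical and the stepper rotates it about the vertical axis:

    import numpy as np

    def sweep_to_points(ranges, scan_angles, rotation_angle):
        """Map one planar sweep to 3D points (all angles in radians)."""
        ranges = np.asarray(ranges)
        scan_angles = np.asarray(scan_angles)
        r_horiz = ranges * np.cos(scan_angles)  # horizontal reach within the scan plane
        z = ranges * np.sin(scan_angles)        # height within the scan plane
        x = r_horiz * np.cos(rotation_angle)    # rotate the plane about the vertical axis
        y = r_horiz * np.sin(rotation_angle)
        return np.column_stack([x, y, z])

Stacking the output over every stepper position would yield the full-sphere point cloud.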
It's a fisheye camera plus a single-channel LiDAR.
It's impressive that the cost of usable LIDAR tech is well within the reach of personal projects now. The sensors used on the first self-driving cars (from companies like SICK, etc.) likely perform much better, but a price point of several thousand dollars is not really viable for experimentation at home.
Not to make everything political, but I wonder how the US tariffs will affect electronics-adjacent hobbies. Anecdotally, the flashlight community on Reddit has been panicking a little about this.
Never mind hobbyists - I work in electronics R&D and my two favorite suppliers are US based even though I am not. Anxious to see how this plays out and that's not even considering our production departments.
I'm sure most electronic hobby projects are going to be financially out of reach for many people, for a while at least. Many people who run businesses around small homebrew projects are struggling too [1]. But it can be extremely hard to tell what might happen with a POTUS who seems to change his mind on tariffs on a whim, with zero apparent thought process, and no prior notice of when they're going to be implemented, removed, and then implemented again at 500% or whatever.
I know Hong Kong's postal service also recently blocked all outbound packages to the US [2]. I don't know how that's impacting shipments of tech like this, but I'd be curious to find out.
[1] Arduboy creator says his tiny Game Boy won’t survive Trump’s tariffs https://www.theverge.com/news/645555/arduboy-victim-trump-ta...
[2] Hong Kong suspends package postal service to the US after Trump’s tariff hikes https://www.cnn.com/2025/04/15/business/hong-kong-suspends-p...
> Not to make everything political... [proceeds to make a political statement]
For what it's worth, this type of Lidar scanner was possible to make well over a decade ago with ROS1, a Phidgets IMU, a webcam, and a lidar pulled out of a Neato vacuum (the cheapest option at the time). This would be around the difficulty of a course project for an undergraduate robotics class and could be done with less than 200 USD of salvaged parts (not including the computer). Hugin was also around over a decade ago.
It's still a nice little project!
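For the curious, the ROS1 side of such a project would have looked roughly like this (a from-memory sketch, not any specific course's code, assuming the Neato driver publishing sensor_msgs/LaserScan on the usual /scan topic):

    import rospy
    from sensor_msgs.msg import LaserScan

    def on_scan(scan):
        # One revolution of the planar lidar: ranges[i] is the distance at
        # angle_min + i * angle_increment, in the laser's own frame.
        valid = [r for r in scan.ranges
                 if scan.range_min < r < scan.range_max]
        rospy.loginfo("got %d valid returns", len(valid))

    rospy.init_node("scan_listener")
    rospy.Subscriber("scan", LaserScan, on_scan)  # most lidar drivers publish /scan
    rospy.spin()

From there, a SLAM package (gmapping and the like) consumed the scans plus odometry/IMU to build the map.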
I would not consider asking a question about the impact of current events on a market segment relevant to the discussion topic to be political. The disclaimer is presumably to encourage respondents not to drag things in an off topic direction. Ironic, considering the outcome.
This seems to be using the classic formula: take a trivial, ready-made component, design a 3D-printed enclosure, and hook it up to a Raspberry Pi. Instant Hacker News homepage.
> Not to make everything political... [proceeds to make a political statement]
Being all polite and non-political and shit is what brought us to this pass.
Never lose an opportunity to make the people who voted for the current state of affairs feel isolated, rejected, guilty, and generally bad. Being nice to them doesn't work.
Please, I don't want to come on to HN to see politics injected into everything. Stay on reddit for that.
I logged in to make a comment regarding something within my area of expertise: the technology present in the parent link and how this technology has been accessible to hobbyists for over 10 years.
>I don't want to come on to HN to see politics injected into everything
If it's political to wonder how tariffs impact the cost of the project we're discussing, then everything is political, and it's pointless to complain about politics being "injected into everything."
Lobsters might be your place if you would like to insulate yourself to that degree.
You’re not making me feel isolated, rejected, guilty, or generally bad.
You’re feeding into the confirmation bias I already have about how the opposition thinks, which only serves to affirm the choice I made.
>You’re feeding into the confirmation bias I already have about how the opposition thinks
It's wild that you acknowledge your cognitive bias and then blame others for it instead of working on it. If I wrote something like that, I hope I would have the wherewithal to notice that something is seriously wrong with my thinking.
We all exhibit cognitive bias.
I’m illustrating how the original behavior feeds confirmation bias instead of establishing a basis for constructive dialog.
Yes the opposition thinks evil is evil. The opposition also thinks water is wet. Check back here tomorrow for more obvious things rational people think.
The opposition reductively believes this is an existential battle between "good and evil", that they're the "good", and that's a position from which one can justify almost anything to eradicate "evil".
How many Supreme Court rulings does it take for a Trump supporter to admit the Trump administration is unjust? The world may never know.
You can always know, if you want to, by actually engaging in constructive dialog. Which probably isn’t going to happen in this thread because it’s ostensibly about a raspberry pi LiDAR scanner, and thus neither really the time nor place.
The MAGA crowd is not even remotely interested in 'constructive dialog' and is so far down the hole of drinking the Kool-Aid that constructive dialog with them will likely never be possible.
You cannot have a constructive dialog about astronomy with someone who thinks the sky is made of green and purple polka dots because that's what someone told them, and who dismisses all evidence to the contrary as a massive conspiracy.
They don't even believe in democracy or constitutional rights - at least, for anyone but them.
I’m interested in constructive dialog, and I believe in democracy and constitutional rights. However, this is a thread about a neat LiDAR scanner.
It's funny - first you call me reductive but now it's all "I'm staying out of this one". Interesting how that goes.
It's true: a Hokuyo or a SICK that sold for several thousand dollars a decade ago is laughably bad compared to something under $100 from Shenzhen these days. When there's a need, there's a way, I guess.
I hope they decide to develop some disruptive stereo, structured-light, or ToF cameras eventually too; those are still mostly overpriced and kinda crap overall.
Short term there's some suffering, but while hobbyists are definitely more price-sensitive, they are also the most flexible. In production you don't just need one piece; you need a steady supply, and any change of components affects the whole product.
How China and the US interact will determine the longer-term future of that economic relationship, but many companies are already adjusting because the future is currently uncertain. With the free trade agreement with the EU and more producers moving to the US, I think it's been a good disruption, even if I'm now also scrambling to find alternative PCB manufacturers.
>With the free trade agreement with the EU
There is no such agreement.
>more producers moving to the US
How many will follow through with these announcements? During Trump's first term, announcing huge projects in the US and then not following through was a common tactic for companies dealing with Trump. Foxconn, for example, announced a new $10 billion factory in Wisconsin. They made some initial investments and stopped when people stopped paying attention. Instead of the promised 13,000, they now employ about 1,000 people there.
And what about all the companies that will have gone out of business by then? This mainly affects small companies, which are exactly the companies you need for a healthy economy. In some cases, they have shipments already paid for that they can't accept because they don't have the liquid assets to pay the unexpected tariffs, so these companies are now at risk of going out of business completely unnecessarily.
It never makes sense to use tariffs for economic reasons. It just does not work. Tariffs can make sense for strategic reasons if you're willing to take an economic hit to lower dependence on other countries for critical industries or technologies. However, the idea that taxes are ever "a good disruption" for the economy does not bear out.
>It never makes sense to use tariffs for economic reasons. It just does not work.
This week, two US companies from which I buy products (I'm in Europe) sent me emails explaining that they have to raise their prices due to tariffs, as they need to import from China for now.
Guess which will happen faster: these companies finding an alternative supplier in the US that matches China's quality-price ratio, or me finding an alternative supplier in China? They just admitted that they're buying from China anyway.