In the release notes of the latest Tesla FSD Beta v11, Tesla explains what is happening to Autopilot with the new update, which also adds the ability to send voice feedback.

Tesla FSD Beta v11 is both an exciting and scary step as it is supposed to merge Tesla’s FSD and Autopilot highway stacks.

FSD Beta enables Tesla vehicles to drive autonomously to a destination entered in the car’s navigation system, but the driver needs to remain vigilant and ready to take control at all times.

Since the responsibility rests with the driver and not Tesla’s system, it is still considered a level-two driver-assist system, despite its name. It has been sort of a “two steps forward, one step back” type of program, as some updates have seen regressions in terms of driving capabilities.

Tesla has frequently been releasing new software updates to the FSD Beta program and adding more owners to it.

Since the wider release of the beta last year, there are currently over 400,000 Tesla owners in the program in North America – virtually every Tesla owner who bought the FSD package on their vehicles.

However, the bulk of these owners have yet to receive significant FSD Beta updates. Tesla was supposed to release v11 to the fleet in November 2022, but the update has been stuck in testing within Tesla’s closed fleet since then.

The update is an important step because it includes many new neural networks, as Elon Musk stated, but from a consumer perspective, it’s also important because it is expected to merge Tesla’s FSD Beta software stack primarily used on roads and city streets with Tesla’s Autopilot software stack, which is used as a level 2 driver assist system on highways.

It has been delayed several times, but recently, Musk confirmed that a new version (v11.3) is going to a closed beta fleet this week – indicating that it might finally be about to be more widely released.

Now NotaTeslaapp, which tracks Tesla software updates, has obtained the FSD Beta v11.3 release notes, and they contain some interesting information.

Tesla starts out by explaining in more detail what is going to happen to Autopilot with this update:

Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision making.

As expected, this leaves the door open for some regression at first, but Tesla makes it clear that it believes this is the way to go long-term.

Another interesting new feature revealed by the release notes is the capacity to send Tesla voice memos about your FSD Beta experience. That’s something beta testers have been asking for for a while, since they can use it to give Tesla more details about a specific situation they experienced with the system.

A big part of the rest of the notes appears to focus on curbing some potentially dangerous driving behaviors that FSD Beta has been known to exhibit, which were recently described by NHTSA in its FSD Beta recall notice.

As we noted in our reporting of the recall, the notice made it sound like Tesla’s “fix” for the “recall” was simply its usual next software update, but the release notes suggest that Tesla did try to address some of these issues more specifically.

Here are the full Tesla FSD Beta v11.3 release notes:

  • Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision-making.
  • Added voice drive-notes. After an intervention, you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.
  • Expanded Automatic Emergency Braking (AEB) to handle vehicles that cross ego’s path. This includes cases where other vehicles run their red light or turn across ego’s path, stealing the right-of-way. Replay of previous collisions of this type suggests that 49% of the events would be mitigated by the new behavior. This improvement is now active in both manual driving and autopilot operation.
  • Improved autopilot reaction time to red light runners and stop sign runners by 500ms, by increased reliance on object’s instantaneous kinematics along with trajectory estimates.
  • Added a long-range highway lanes network to enable earlier response to blocked lanes and high curvature.
  • Reduced goal pose prediction error for candidate trajectory neural network by 40% and reduced runtime by 3X. This was achieved by improving the dataset using heavier and more robust offline optimization, increasing the size of this improved dataset by 4X, and implementing a better architecture and feature space.
  • Improved occupancy network detections by oversampling on 180K challenging videos including rain reflections, road debris, and high curvature.
  • Improved recall for close-by cut-in cases by 20% by adding 40k autolabeled fleet clips of this scenario to the dataset. Also improved handling of cut-in cases by improved modeling of their motion into ego’s lane, leveraging the same for smoother lateral and longitudinal control for cut-in objects.
  • Added “lane guidance” module and perceptual loss to the Road Edges and Lines network, improving the absolute recall of lines by 6% and the absolute recall of road edges by 7%.
  • Improved overall geometry and stability of lane predictions by updating the “lane guidance” module representation with information relevant to predicting crossing and oncoming lanes.
  • Improved handling through high speed and high curvature scenarios by offsetting towards inner lane lines.
  • Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions and more differentiation between the FSD driving profiles with respect to speed lane changes.
  • Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles.
  • Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%, primarily from migrating to more densely supervised autolabeled datasets.
  • Improved semantic detections for school buses by 12% and vehicles transitioning from stationary-to-driving by 15%. This was achieved by improving dataset label accuracy and increasing dataset size by 5%.
  • Improved decision-making at crosswalks by leveraging neural network-based ego trajectory estimation in place of approximated kinematic models.
  • Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
  • Unlocked longer fleet telemetry clips (by up to 26%) by balancing compressed IPC buffers and optimized write scheduling across twin SOCs.
