Fitting librealsense2 to an Apple Silicon MacBook

Why

The data collection setup for my trajectory prediction project requires two RealSense cameras (a D400 and a T265) running at the same time to populate channels in ROS1 Noetic. The old ThinkPad runs extremely slowly: librealsense fails to respond, the RealSense restarts, and that causes jumps in the collected data.

Earlier I got an M2 Pro MacBook Pro, and I had been thinking about running the experiments on the Mac for a long time, but never got the courage to do it. This week I finally decided that I can't stand the slowness of that old ThinkPad! (Also because I'm upgrading my setup from a D435 to a D455.)

Searching around the internet, I found absolutely zero tutorials or blog posts that try to go down this route. After 40+ hours of struggle, I made it! The process is definitely non-trivial, so I decided to share my experience in the hope that it might help someone else, given the popularity of RealSense.

Facts

  1. Intel officially dropped support for the T265 at librealsense version 2.50.
  2. The last version the T265 works with is 2.53.1 (the version I recommend).
  3. The D455, released in 2021, only works with librealsense > 2.50.0.
  4. Official arm64 support was added in 2.54 (sudo apt install …).
  5. Don't even think of emulating an x86 OS (hint: extremely slow).
  6. The ROS1 RealSense wrapper is discontinued; the last version is 2.3.2 (it officially supports up to librealsense 2.50).
  • Intel's firmware release documentation is useful for checking firmware compatibility.

🧠 Simply put: if you, for any reason, don't want to use ROS2 on Apple Silicon, the route in this post is the only way to go.

Process

  1. You need to install an arm64 Ubuntu 20.04 virtual machine (you will only find the server image available for arm64). Don't worry, Rosetta 2 is fast, meaning you will only lose roughly 10% performance. Refer to my other post for more detail; the arm64 server image can be downloaded from Ubuntu's site.

I recommend Parallels Desktop; UTM works too, but Parallels is worth the extra coins.
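
Once the VM is up, it's worth confirming you are actually running arm64 and not an emulated guest:

    uname -m    # should print aarch64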

  2. Install the desktop environment:

    sudo apt update && sudo apt upgrade
    sudo apt install ubuntu-desktop
    sudo reboot
    
  3. Download the librealsense2 == 2.53.1 source zip from the official GitHub releases page. Unzip it in ~/Documents (commands sketched below).
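
    For example (the archive URL follows GitHub's release-tag convention; adjust the tag if you choose another version):

    cd ~/Documents
    wget https://github.com/IntelRealSense/librealsense/archive/refs/tags/v2.53.1.zip
    unzip v2.53.1.zip    # unpacks to librealsense-2.53.1/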

  4. Install the dependencies. Disconnect all RealSense cameras from this point on:

    sudo apt install libssl-dev libusb-1.0-0-dev libudev-dev pkg-config libgtk-3-dev
    sudo apt install git wget cmake build-essential
    sudo apt install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev at
    
  5. Set up the udev rules manually:

    cd ~/Documents/librealsense-xxx
    ./scripts/setup_udev_rules.sh
    
  6. We WILL use the RSUSB backend (for some reason the V4L backend does not work; it might be because macOS handles the cameras through libuvc). So you MUST patch the Linux kernel for it to work.

  7. Before you patch, you need to change the URL in the patch script: the old link no longer works, and git will hang on that line.

    Open ./scripts/patch-realsense-ubuntu-lts.sh, go to line 107, and change the git repo address to this:

    git remote add origin https://kernel.ubuntu.com/git/ubuntu/ubuntu-${ubuntu_codename}.git
    
  8. Run patch-realsense-ubuntu-lts.sh, and answer y if a patch hunk prompts you to assume -R (you will understand when you see it). You should see a success message at the end. A quick sanity check is sketched below.
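
    To sanity-check the patch (the official guide notes that the patched uvcvideo module's version string mentions realsense; treat the exact output as version-dependent):

    modinfo uvcvideo | grep "version:"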

  9. Now proceed to build librealsense2; note the CMake flags used:

    cd ~/Documents/librealsense-xxxx
    mkdir build && cd build
    cmake ../ -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=Release -DFORCE_RSUSB_BACKEND=true
    
  10. Make and install the lib:

    sudo make uninstall && make clean
    make -j10
    sudo make install
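
    If anything later complains about missing shared libraries, refreshing the linker cache usually helps (standard practice after installing to /usr/local; not librealsense-specific):

    sudo ldconfig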
    
  11. You should have a working librealsense now! Restart and try:

    realsense-viewer
    

💡 Refer to the section below for a bit more detail on working with librealsense in ROS.

Also…

🍎 On an M2 Pro MacBook Pro 14-inch, a full battery lasts about 2 hours of running RealSense, with CPU utilization at around 3 full cores.

💡 You might see quite a few errors when you connect the camera, but the error message is actually “Success”. It might be due to the different way libusb handles errors. Don't worry about it; if things don't work, replug and try again.

ROS1 wrapper with compiled librealsense

  1. You might need to point the wrapper to the location of your compiled librealsense. Change the build configuration of realsense2_camera to add a librealsense directory variable (a sketch follows; search online for more details, or shoot me an email).
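
    A minimal sketch (the workspace path, install prefix, and realsense2_DIR location are assumptions based on a default make install to /usr/local; verify them on your machine):

    cd ~/catkin_ws
    catkin_make -Drealsense2_DIR=/usr/local/lib/cmake/realsense2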

  2. At startup I launch the two cameras separately so that the D400 does not throw an error. Specifically, run these two in different terminals (you might also want to change the topic names in the launch files so there is no conflict; see the namespacing sketch after the commands):

    roslaunch realsense2_camera rs_camera.launch
    roslaunch realsense2_camera rs_t265.launch
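
    For distinct namespaces, the launch files accept a camera argument (argument name as used in the ROS1 wrapper's launch files; verify in your copy):

    roslaunch realsense2_camera rs_camera.launch camera:=d455
    roslaunch realsense2_camera rs_t265.launch camera:=t265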
    
  3. You will NOT have access to point clouds, but you can still get the depth frames, and it is pretty easy to convert depth frames to point clouds.

    • Check the intrinsics, and the extrinsics between color and depth, with:

      rs-enumerate-devices -c
      
    Callback and converting a depth image to a point cloud:
    // Includes needed to make this snippet self-contained.
    #include <ros/ros.h>
    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>
    #include <opencv2/opencv.hpp>
    #include <Eigen/Dense>
    #include <cmath>

    // Convert a D455 depth image to a point cloud array in Eigen.
    // Find the camera intrinsics via 'rs-enumerate-devices -c'.
    Eigen::MatrixXf DepthImageToPointCloud(const cv_bridge::CvImageConstPtr& cv_ptr, float fx, float fy, float ppx, float ppy) {
        // Extract the image dimensions
        int imageWidth = cv_ptr->image.cols;
        int imageHeight = cv_ptr->image.rows;
    
        Eigen::MatrixXf pointCloud(imageWidth * imageHeight, 3);
    
        int index = 0;
        for (int v = 0; v < imageHeight; ++v) {
            for (int u = 0; u < imageWidth; ++u) {
                uint16_t depthValue = cv_ptr->image.at<uint16_t>(v, u);
    
                // Convert depth from millimeters to meters
                // Adjust the divisor according to your camera's specifications
                float depth = static_cast<float>(depthValue) / 1000.0f;
    
                // Check for invalid or NaN depth
                if (depth <= 0 || std::isnan(depth)) {
                    continue;
                }
    
                // Convert to 3D point
                float x = (u - ppx) * depth / fx;
                float y = (v - ppy) * depth / fy;
                float z = depth;
    
                // Store in point cloud
                pointCloud(index, 0) = x;
                pointCloud(index, 1) = y;
                pointCloud(index, 2) = z;
                index++;
            }
        }
    
        // Resize the point cloud to the actual number of valid points
        pointCloud.conservativeResize(index, Eigen::NoChange);
    
        return pointCloud;
    }
    
    void handleCamera(const sensor_msgs::ImageConstPtr& msg){
        //ROS_INFO("Image received");
        double t = msg->header.stamp.toSec();  // frame timestamp, kept for downstream bookkeeping
        cv_bridge::CvImageConstPtr cv_ptr;
        try {
            cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_16UC1);
            ROS_INFO("copied image");
        }
        catch (cv_bridge::Exception& e) {
            ROS_ERROR("cv_bridge exception: %s", e.what());
            return;
        }
    
        Eigen::MatrixXf cloud_mat;
        // intrinsic for 848x480
        cloud_mat = DepthImageToPointCloud(cv_ptr, 431.0087890625, 431.0087890625, 429.328704833984, 242.162155151367);
    
        printf("%ld %ld cloud_mat\n", static_cast<long>(cloud_mat.rows()), static_cast<long>(cloud_mat.cols()));
    
        // Create a new Mat to hold the normalized image
        cv::Mat normalized_img;
    
        // Normalize to [0, 255], computing the min/max only over pixels <= 8000 mm
        // (the mask keeps far readings from stretching the normalization range)
        cv::normalize(cv_ptr->image, normalized_img, 0, 255, cv::NORM_MINMAX, CV_8UC1, cv::Mat(cv_ptr->image <= 8000));
    
    
        std::string OPENCV_WINDOW = "Camera";
        cv::imshow(OPENCV_WINDOW, normalized_img);
        cv::waitKey(1);
    }
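
    A quick way to confirm depth frames are actually flowing before wiring up the subscriber (the topic name is the ROS1 wrapper's default; confirm with rostopic list):

    rostopic hz /camera/depth/image_rect_raw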
    
  4. Hopefully you are good to go now!

  • The full set of CMake build flags is documented in the librealsense build documentation, for reference.

  • The official Linux build guide is doc/installation.md in the librealsense repository.
