Query Regarding GNU TeXmacs Compatibility with Apple Vision Pro & Possible New Features

Dear TeXmacs Community, Developers and Contributors,

I hope this message finds you well. I am writing to inquire about the potential compatibility of GNU TeXmacs with Apple’s recently announced Vision Pro headset.

As many of you are aware, the Vision Pro is Apple’s foray into spatial computing, a promising new field in the world of technology. Given the potential for this device to redefine how we interact with digital content, I am curious about the possibility of using GNU TeXmacs on this platform.

TeXmacs has been a vital tool for me and many others in the scientific and academic community. Its capabilities as a scientific word processor and typesetting system have made it indispensable for creating high-quality, professional-looking documents.

Apple is actively encouraging developers to build applications for the Vision Pro and is offering compatibility evaluations for existing iPad and iPhone apps. As such, I was wondering whether there are any plans or considerations for making GNU TeXmacs compatible with the Vision Pro.

I understand that such a task would require a significant amount of work and resources, and I appreciate all the efforts the team puts into maintaining and improving TeXmacs. My curiosity stems from the exciting prospect of being able to use TeXmacs in a spatial computing environment.

Thank you for taking the time to read my message. Any information or insights you could provide would be greatly appreciated.

Best regards,


Dear Friends,

I recently came across a fascinating conversation that piqued my interest. The topic? The potential benefits of developing a 3D mixed-reality version of TeXmacs, specifically for Apple’s upcoming AR-VR headset - the Apple Vision Pro. You can dive into the discussion in more detail here.

The idea is intriguing, especially when we consider the unique capabilities of the Apple Vision Pro. Its mixed-reality environment, infinite canvas, and high-resolution micro-OLED display offer a digital experience that transcends traditional computing. Imagine the world of mathematical symbols, formulae, and diagrams not merely confined to the flat surface of a screen, but alive in three dimensions around us!

Now, let’s envision the revolutionary features that a 3D TeXmacs could offer (imagined by ChatGPT):

  1. Immersive 3D Visualization : TeXmacs could transform 2D mathematical graphs and diagrams into immersive 3D models. This would aid in comprehending complex mathematical concepts and relationships in a more intuitive and interactive manner. With the infinite canvas that the Vision Pro provides, we could develop a 3D graphical interface for TeXmacs, enabling users to manipulate mathematical expressions with their hands, zoom in and out of 3D plots, and even walk around large diagrams to better understand their structure.

  2. Gesture Editing : By taking advantage of the Vision Pro’s hand-tracking capabilities, users could manipulate mathematical symbols and equations in 3D space using their hands. This would make the editing process more natural and intuitive.

  3. Infinite Workspace : The infinite canvas provided by the Vision Pro could be used to create an extensive workspace for arranging and comparing multiple mathematical documents side by side. Pages could be placed anywhere: a Vision Pro version of TeXmacs could let page n exist in m copies, pinned to the walls, the doors, the floor, the ceiling, or floating in mid-air.

  4. Immersive Collaboration : Collaborative work could reach new heights. Imagine working on a complex equation with colleagues from around the world, all of you interacting with the same 3D mathematical models in real time.

  5. Spatial Navigation : Navigating a complex mathematical document could be as simple as looking around: gazing at a specific section would instantly jump to it.

  6. Spatial Document Management : Imagine having multiple TeXmacs documents open around you in the same 3D space, facilitating easy comparison and cross-reference. This could be a game-changer for large projects or collaborative research.

  7. Eye-Drawn Graphs : The user could use their gaze to plot points on a 3D grid. As the user’s gaze moves, the system could draw a line following the gaze path, creating a graph. This could be used to sketch rough graphs quickly or to create visual approximations of mathematical functions. For 3D models or graphs, users could adjust parameters or manipulate the model using their gaze. For example, looking at a certain part of a 3D model could highlight it, and then users could use a predefined eye movement to adjust it.

  8. Eye-Activated Commands : Certain commands or features could be activated by eye movements. For example, a quick double-blink could be used to undo the last action, or a long gaze at a specific tool could open its options menu.

  9. Focus-Based Learning : The software could track where the user is looking most often and use this information to adapt learning materials. If a user spends a lot of time looking at a certain concept or equation, the system could provide more examples or explanations related to that concept.

  10. Gaze-Based Prompts : When the user’s gaze rests on a particular tool or function, the software could provide a brief description or tip related to that feature.

  11. Attention-Based Alerts : If the software detects that the user’s attention is drifting away from the document, it could send gentle alerts to bring the focus back. This could be particularly useful during long study or work sessions.

  12. Eye Movement Analysis : By analyzing the user’s eye movements, the software could provide insights into their reading and problem-solving strategies. This data could be used to offer personalized tips and improvements.

  13. Smart Highlighting : When the user’s gaze rests on a particular section of text or an equation for a certain length of time, the software could automatically highlight that section. This could be useful for reviewing important parts of a document later.

  14. Attention Heatmaps : The software could create heatmaps showing where users focused their gaze during a session. This could provide insights into which parts of the document or software are most engaging or require the most attention.

  15. Layered Document Viewing : As you suggested, users could place documents in a layered fashion, with one page in front of the other. This would allow users to easily switch between documents by simply shifting their focus from one layer to the next. By bringing a document or a page closer to the front, the software could automatically minimize distractions, dimming other pages or documents and focusing the user’s attention on the selected item.

  16. Depth-Based Organization : Users could organize their workspace based on depth. For example, current tasks could be placed in the foreground, while less urgent tasks could be placed further back. This would provide a visual representation of task priority and help users manage their work more efficiently.

  17. 3D Mind Maps : Users could create mind maps or concept maps in three dimensions, with different layers representing different levels of detail or different aspects of a topic.

  18. Depth-Sensitive Search : Users could perform a search that prioritizes results based on their depth. For example, a search could first show results from documents in the foreground and then show results from documents further back.

  19. 3D Annotations : Annotations could also exist in three dimensions. For example, users could place comments or notes above, below, or beside a document in the 3D space.

  20. Angle-Based Layers : Different layers or sections of a document could be viewed from different angles. For example, a user could switch from viewing the main text of a document to viewing its annotations or references by changing their viewing angle.

  21. 3D Reference Visualization : When a user highlights a citation or a reference in the document, the corresponding source or equation could appear in 3D space next to the reference. This would allow users to quickly view referenced content without having to navigate away from their current location in the document.

  22. Gesture-Based Reference Navigation : Users could use hand gestures or eye movements to navigate to referenced equations or bibliography items. For example, a quick hand swipe or gaze hold on a citation could take the user to the corresponding item in the bibliography.

  23. Spatial Reference Organization : References could be organized spatially around the document. Users could place frequently accessed references closer for quick access, while less frequently accessed references could be placed further away.

  24. Dynamic Reference Highlighting : When users are reading a section of text, citations and references in that section could automatically be highlighted or brought to the front. This would make it easier for users to recognize and access referenced content.

  25. Reference Previews : Users could get a quick preview of a reference (be it a citation, equation, or figure) by hovering their hand over it or by focusing their gaze on it. This would allow users to quickly check references without navigating away from their current location.

  26. 3D Hierarchical Bracket Interaction : In a 3D version of TeXmacs on the Apple Vision Pro, we could introduce a dynamic 3D hierarchical bracket visualization system. This system would visually represent different levels of brackets in mathematical equations at varying depths in the 3D space, creating an intuitive layered structure. Users could interact with these brackets through gaze, gestures, or voice commands. Highlighting a bracket would automatically bring its pair forward, simplifying the task of identifying corresponding brackets. Additionally, users could navigate and manipulate the bracketed layers through depth-based navigation and gesture-controlled operations, providing an immersive and intuitive way to comprehend and work with complex mathematical equations.
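Several of the gaze-driven ideas above (spatial navigation, smart highlighting, gaze-based prompts) reduce to the same primitive: detecting when the gaze dwells on one element past a threshold. A minimal sketch in Python, assuming a hypothetical renderer that reports which document element lies under the current gaze ray:

```python
class DwellHighlighter:
    """Emit a highlight event when the gaze rests on one element long enough.

    Gaze samples are (timestamp_seconds, element_id) pairs, where element_id
    is whatever the renderer reports under the gaze ray (an assumption here;
    the real eye-tracking API would differ).
    """

    def __init__(self, dwell_threshold=0.8):
        self.dwell_threshold = dwell_threshold  # seconds of sustained gaze
        self._current = None   # element currently being looked at
        self._since = None     # timestamp when the gaze entered that element
        self._fired = False    # avoid re-firing during the same dwell

    def feed(self, t, element_id):
        """Process one gaze sample; return the element to highlight, or None."""
        if element_id != self._current:
            # Gaze moved to a new element: restart the dwell timer.
            self._current, self._since, self._fired = element_id, t, False
            return None
        if (not self._fired and element_id is not None
                and t - self._since >= self.dwell_threshold):
            self._fired = True
            return element_id
        return None
```

In use, the editor would call `feed` at the eye tracker's sample rate (say 30 Hz) and react to the returned element by highlighting it, opening a tooltip, or jumping to a section, depending on which of the features above it implements.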
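The hierarchical bracket visualization in item 26 also has a simple algorithmic core: compute the nesting depth of each bracket in an expression and map depth to a z-offset, so that matching pairs sit on the same layer in space. A hypothetical sketch (the per-level offset `z_step` is an invented parameter):

```python
def bracket_layers(expr, z_step=0.05):
    """Assign each bracket in `expr` a nesting depth and a z-offset.

    Returns a list of (index, char, depth, z) tuples; depth 0 is the
    outermost level, and z = depth * z_step is a notional distance (metres)
    at which that bracket's layer could float in front of the base equation.
    Raises ValueError on unbalanced brackets.
    """
    pairs = {')': '(', ']': '[', '}': '{'}
    openers = set(pairs.values())
    stack, layers = [], []
    for i, ch in enumerate(expr):
        if ch in openers:
            layers.append((i, ch, len(stack), len(stack) * z_step))
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack[-1] != pairs[ch]:
                raise ValueError(f"unbalanced bracket at index {i}")
            stack.pop()
            layers.append((i, ch, len(stack), len(stack) * z_step))
    if stack:
        raise ValueError("unclosed bracket(s)")
    return layers
```

Because matching brackets come out with equal depth, "bringing a bracket's pair forward" is just a matter of selecting all brackets at the same depth within the same subexpression and animating their shared layer.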

I believe that these features could significantly enhance the way we work with TeXmacs and math in general. However, this is just the tip of the iceberg. The potential of mixed reality in the field of mathematical software is vast and largely unexplored. I invite you all to brainstorm and share your innovative ideas on how we could further leverage the power of the Apple Vision Pro to bring new features to TeXmacs.

Looking forward to hearing your thoughts.

Best regards,

What about viewing a 2D TeXmacs document as a 3D real-time strategy game where you deploy various units to work on different aspects of the document?

The terrain would represent the 2D document while various 3D units would work on the terrain to write the document.

Some of the units could be bots (e.g., an AI bot to update the introduction in real-time as the rest of the document changes, etc.) while other units could be your human collaborators.

Units could work on the document concurrently in real time, and a 3D view would show them at work, much as in a real-time strategy game.