AGL Development / SPEC-5072

[RFC] Further improvements and functionality for the AGL compositor

Type: Improvement
Priority: Minor
Resolution: Fixed

This Jira task is a placeholder for discussing/reviewing some of the functionality present in ivi-shell which is useful, and possibly required, for switching away from ivi-shell.

1. Layering system/management

One thing that was pointed out by jwinarske in our IVI PR call was the layering system, where the scene-graph is composed from multiple layers, each application being placed into such a layer, and presumably the ability to manage those layers. There is some kind of policy decision mechanism which decides which application goes into which layer. I assume this is something internal and out of scope for AGL.
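
For reference, this is roughly what that layering looks like from the controller side with wayland-ivi-extension's ILM control API. The snippet below is a minimal, untested sketch; the layer/surface IDs, dimensions and display number are made-up values:

{code:c}
#include <ilm/ilm_control.h>

/* Made-up IDs; a real controller would take these from its policy. */
#define LAYER_APPS   1000
#define SURFACE_NAVI 10

int main(void)
{
    t_ilm_layer layer = LAYER_APPS;
    t_ilm_layer render_order[] = { LAYER_APPS };

    if (ilm_init() != ILM_SUCCESS)
        return 1;

    /* Create a layer, put one surface into it and make both visible. */
    ilm_layerCreateWithDimension(&layer, 1920, 1080);
    ilm_layerAddSurface(layer, SURFACE_NAVI);
    ilm_layerSetVisibility(layer, ILM_TRUE);
    ilm_surfaceSetVisibility(SURFACE_NAVI, ILM_TRUE);

    /* The render order of the layers on a screen defines the scene-graph. */
    ilm_displaySetRenderOrder(0, render_order, 1);
    ilm_commitChanges();

    ilm_destroy();
    return 0;
}
{code}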

The layers functionality in wayland-ivi-extension, and with it ivi-shell, internally uses the same API that we have in the AGL compositor when using the libweston API. With the AGL compositor none of that is exposed; instead the compositor shuffles the applications when a new application is activated/started. Internally we have a bunch of layers that make up the scene-graph for the applications, and depending on its internal role an app gets placed into one of those layers. Reactivation/placement/movement across outputs means removing the view from one layer and inserting it into another. The same thing happens with wayland-ivi-extension/ivi-shell, where apps and layers are identified by integer numbers.
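
For context, the layer "shuffling" mentioned above boils down to libweston's weston_layer API; a minimal sketch of what activating an application amounts to internally might look like this (helper names are mine, error handling omitted):

{code:c}
#include <libweston/libweston.h>

/* Two layers; the position controls stacking order in the scene-graph,
 * higher positions are rendered on top. */
static void
layers_init(struct weston_compositor *ec,
            struct weston_layer *background, struct weston_layer *apps)
{
    weston_layer_init(background, ec);
    weston_layer_set_position(background, WESTON_LAYER_POSITION_BACKGROUND);

    weston_layer_init(apps, ec);
    weston_layer_set_position(apps, WESTON_LAYER_POSITION_NORMAL);
}

/* "Activating" or moving an application is essentially removing its view
 * from the layer it currently sits in and inserting it into another one. */
static void
move_view_to_layer(struct weston_view *view, struct weston_layer *layer)
{
    weston_layer_entry_remove(&view->layer_link);
    weston_layer_entry_insert(&layer->view_list, &view->layer_link);
    weston_view_geometry_dirty(view);
    weston_surface_damage(view->surface);
}
{code}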

To provide some context on how things currently work:

In AGL, we have a surface/view for each application, meaning that we automatically map, 1-to-1, a surface to an application, which gets placed into a layer. The apps are also maximized, occupying a well-defined area within the output dimensions. There are other app roles which can be used to manipulate the app, making it either free floating, where you can position the app, or fullscreen, which covers the entire output. All of this can be controlled using the gRPC API or a Wayland protocol.
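
To make the role handling a bit more concrete, here is a rough compositor-side sketch (not the actual agl-compositor code) of how a role could translate into a layer choice and a libweston-desktop sizing request; the enum, struct and helper names are made up for illustration:

{code:c}
#include <stdbool.h>
#include <libweston/libweston.h>
#include <libweston-desktop/libweston-desktop.h>

/* Hypothetical roles mirroring the behaviour described above. */
enum app_role {
    APP_ROLE_DESKTOP,    /* maximized, well-defined area on the output */
    APP_ROLE_FLOAT,      /* free floating, can be positioned */
    APP_ROLE_FULLSCREEN, /* covers the entire output */
};

struct app_layers {
    struct weston_layer normal;
    struct weston_layer floating;
    struct weston_layer fullscreen;
};

/* Pick the layer and ask the client to resize according to its role. */
static struct weston_layer *
apply_role(struct app_layers *layers, struct weston_desktop_surface *dsurface,
           struct weston_output *output, enum app_role role)
{
    switch (role) {
    case APP_ROLE_FULLSCREEN:
        weston_desktop_surface_set_fullscreen(dsurface, true);
        return &layers->fullscreen;
    case APP_ROLE_FLOAT:
        /* size/position left to the client and to the placement logic */
        return &layers->floating;
    case APP_ROLE_DESKTOP:
    default:
        weston_desktop_surface_set_maximized(dsurface, true);
        weston_desktop_surface_set_size(dsurface,
                                        output->width, output->height);
        return &layers->normal;
    }
}
{code}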

I suspect that the use cases pointed out by Joel might be a bit different, where placement and composition of the scene-graph is controlled by an external entity, possibly based on certain policies. It is not immediately obvious to me why exposing the layers and, furthermore, the ability to interact with them is necessary. jwinarske, some feedback on the usage and use cases might help me understand the situation a bit better and why access to layers is needed.

I think that dynamically creating and manipulating applications and, further on, doing the same with layers, while at the same time having a way to expose that, is probably best handled by SPEC-3436, detailed as well in https://confluence.automotivelinux.org/pages/viewpage.action?pageId=102170808, in slide 10.

2. Input propagation and dissemination

Another thing that was pointed out is the ability to distribute input events to multiple applications at the same time. The use case in our situation is that we will have (keyboard) input focus on the currently active window, even in split-window type situations. This also looks like something SPEC-3436 might be able to handle by providing the ability to customize this behaviour.
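
To illustrate what distributing input would mean at the libweston level: today the compositor hands keyboard focus to a single view, so every wl_keyboard event goes to one client. Delivering the same events to several applications would need something along the lines of the broadcast below, which is purely illustrative and untested, and would still need per-client enter/leave and policy handling:

{code:c}
#include <libweston/libweston.h>
#include <wayland-server.h>

/* Today: keyboard input follows the single active view. */
static void
focus_active_view(struct weston_keyboard *keyboard, struct weston_view *view)
{
    weston_keyboard_set_focus(keyboard, view);
}

/* Illustrative only: push a key event to every client that has bound
 * wl_keyboard on this seat, focused or not. A real implementation would
 * also have to manage enter/leave, per-client serials and policy. */
static void
broadcast_key(struct weston_keyboard *keyboard,
              uint32_t time, uint32_t key, uint32_t state)
{
    struct wl_display *display = keyboard->seat->compositor->wl_display;
    uint32_t serial = wl_display_next_serial(display);
    struct wl_resource *resource;

    wl_resource_for_each(resource, &keyboard->focus_resource_list)
        wl_keyboard_send_key(resource, serial, time, key, state);
    wl_resource_for_each(resource, &keyboard->resource_list)
        wl_keyboard_send_key(resource, serial, time, key, state);
}
{code}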

3. HUDs

The other issue, also brought up by jwinarske, displaying apps on HUDs, has its own task in SPEC-4910.

/cc jwinarske waltminer scottm

Assignee: Marius Vlad (mvlad)
Reporter: Marius Vlad (mvlad)