In GTK, how are screen coordinates mapped to GUI objects?

I am trying to gain a very technical understanding of how operating systems and GUI toolkits (like GTK) scalably map coordinates on the 2D screen to objects.  For example, many applications with graphical interfaces may be active on the screen at once, and each interface has dozens (if not hundreds) of objects (buttons, scroll bars, etc.).  Some objects are contained within other objects.

So, what I am trying to learn more about is how something like GTK maps a screen coordinate (plus an action such as a mouse click) to an object quickly enough to dispatch a callback to the appropriate application or object.  In particular, when there are objects within objects, how does it narrow the search down to the "smallest" element the mouse is hovering over when a click event fires?

I'd love to know what data structures and algorithms are used to perform this mapping from the 2D graphical space to objects.
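To make the question concrete, here is a rough sketch of the naive approach I can imagine: a recursive walk down the widget tree, where each level translates the point into the child's coordinate space and recurses into the topmost child whose rectangle contains it, so the cost is roughly (tree depth) x (children per level) rather than a scan of every widget on screen.  Everything below is hypothetical (the struct and function names are mine, not GTK's, and it assumes children lie inside their parent's bounds); I am asking whether GTK does something like this or uses a smarter spatial index:

/* Hypothetical sketch of recursive hit-testing over a widget
 * tree -- not GTK's actual code or API. */
#include <stdio.h>
#include <stdbool.h>

typedef struct Widget {
    const char *name;
    int x, y, w, h;            /* bounds in the parent's coordinate space */
    struct Widget *children;   /* array of child widgets */
    int n_children;
} Widget;

static bool contains(const Widget *wid, int px, int py)
{
    return px >= wid->x && px < wid->x + wid->w &&
           py >= wid->y && py < wid->y + wid->h;
}

/* Return the deepest widget containing (px, py), or NULL. */
static const Widget *pick(const Widget *wid, int px, int py)
{
    if (!contains(wid, px, py))
        return NULL;

    /* Translate the point into this widget's coordinate space,
     * then try the children; assume later siblings are drawn on
     * top, so scan them back to front. */
    int cx = px - wid->x, cy = py - wid->y;
    for (int i = wid->n_children - 1; i >= 0; i--) {
        const Widget *hit = pick(&wid->children[i], cx, cy);
        if (hit)
            return hit;
    }
    return wid;  /* no child claimed the point; this widget is the hit */
}

int main(void)
{
    Widget button = { "button", 10, 10,  80,  30, NULL,    0 };
    Widget panel  = { "panel",  50, 50, 200, 100, &button, 1 };
    Widget window = { "window",  0,  0, 640, 480, &panel,  1 };

    /* (70, 70) in window coordinates lands inside the button. */
    const Widget *hit = pick(&window, 70, 70);
    printf("hit: %s\n", hit ? hit->name : "none");
    return 0;
}

Is this descent-through-the-containment-tree idea essentially what happens, or do toolkits keep an auxiliary structure (a quadtree, a sorted list of rectangles, etc.) for this?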

Thanks!
George

