We introduce ScreenAI, a vision-language model for user interfaces and infographics that achieves state-of-the-art results on UI- and infographics-based tasks. We are also releasing three new datasets: Screen Annotation to evaluate the layout understanding capability of the model, as well as ScreenQA Short and Complex ScreenQA for a more comprehensive evaluation of its QA capability.
Screen user interfaces (UIs) and infographics, such as charts, diagrams, and tables, play important roles in human communication and human-machine interaction, as they facilitate rich and interactive user experiences. UIs and infographics share similar design principles and visual language (e.g., icons and layouts), which offer an opportunity to build a single model that can understand, reason about, and interact with these interfaces. However, because of their complexity and varied presentation formats, infographics and UIs present a unique modeling challenge.
To that end, we introduce “ScreenAI: A Vision-Language Model for UI and Infographics Understanding”. ScreenAI improves upon the PaLI architecture with the flexible patching strategy from pix2struct. We train ScreenAI on a unique mixture of datasets and tasks, including a novel Screen Annotation task that requires the model to identify UI element information (i.e., type, location, and description) on a screen. These text annotations provide large language models (LLMs) with screen descriptions, enabling them to automatically generate question-answering (QA), UI navigation, and summarization training datasets at scale. At only 5B parameters, ScreenAI achieves state-of-the-art results on UI- and infographic-based tasks (WebSRC and MoTIF), and best-in-class performance on ChartQA, DocVQA, and InfographicVQA compared to models of similar size. We are also releasing three new datasets: Screen Annotation to evaluate the layout understanding capability of the model, as well as ScreenQA Short and Complex ScreenQA for a more comprehensive evaluation of its QA capability.
ScreenAI
ScreenAI’s architecture is based on PaLI, composed of a multimodal encoder block and an autoregressive decoder. The PaLI encoder uses a vision transformer (ViT) that creates image embeddings and a multimodal encoder that takes the concatenation of the image and text embeddings as input. This flexible architecture allows ScreenAI to solve vision tasks that can be recast as text+image-to-text problems.
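For readers who prefer code, here is a minimal sketch of this text+image-to-text framing, written against a PyTorch-like interface. The module names and signatures are placeholders chosen for illustration, not the actual ScreenAI implementation.

```python
import torch
from torch import nn

class ScreenAILikeModel(nn.Module):
    """Toy stand-in for the PaLI-style layout: a ViT image encoder,
    a multimodal encoder over concatenated embeddings, and a text decoder."""

    def __init__(self, vit: nn.Module, embed_text: nn.Module,
                 multimodal_encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.vit = vit                              # image -> patch embeddings [B, P, D]
        self.embed_text = embed_text                # token ids -> text embeddings [B, T, D]
        self.multimodal_encoder = multimodal_encoder
        self.decoder = decoder                      # autoregressive text decoder

    def forward(self, image, input_ids, decoder_input_ids):
        image_embeds = self.vit(image)
        text_embeds = self.embed_text(input_ids)
        # Every task is framed as text+image -> text: the encoder sees the
        # concatenated image and text embeddings, the decoder emits the answer.
        fused = self.multimodal_encoder(
            torch.cat([image_embeds, text_embeds], dim=1))
        return self.decoder(decoder_input_ids, fused)
```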
On top of the PaLI architecture, we employ the flexible patching strategy introduced in pix2struct. Instead of using a fixed-grid pattern, the grid dimensions are selected such that they preserve the native aspect ratio of the input image. This allows ScreenAI to work well across images of various aspect ratios.
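As a rough illustration of this idea, the sketch below picks a patch grid that respects the image’s aspect ratio under a fixed patch budget, in the spirit of pix2struct’s variable-resolution preprocessing; the patch size and budget are illustrative values, not the ones used by ScreenAI.

```python
import math

def aspect_ratio_grid(img_h: int, img_w: int, patch_size: int = 16,
                      max_patches: int = 1024) -> tuple[int, int]:
    """Choose a patch grid (rows, cols) that roughly preserves the image's
    aspect ratio while keeping rows * cols <= max_patches."""
    # Scale the image so that it covers about max_patches patches in total.
    scale = math.sqrt(max_patches * (patch_size / img_h) * (patch_size / img_w))
    rows = max(1, min(max_patches, math.floor(scale * img_h / patch_size)))
    cols = max(1, min(max_patches, math.floor(scale * img_w / patch_size)))
    return rows, cols

# A tall mobile screenshot and a wide desktop screenshot get different grids,
# but both use roughly the same total number of patches.
print(aspect_ratio_grid(2400, 1080))   # portrait phone screen
print(aspect_ratio_grid(1080, 1920))   # landscape desktop screen
```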
The ScreenAI model is trained in two stages: a pre-training stage followed by a fine-tuning stage. First, self-supervised learning is applied to automatically generate data labels, which are then used to train the ViT and the language model. The ViT is frozen during the fine-tuning stage, where most of the data used is manually labeled by human raters.
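A minimal sketch of the fine-tuning setup, assuming a PyTorch model whose image encoder is exposed as `model.vit` (a hypothetical attribute name), could look like this:

```python
import torch

def prepare_for_finetuning(model: torch.nn.Module, lr: float = 1e-4):
    """Freeze the image encoder for the fine-tuning stage.

    Hypothetical helper illustrating the two-stage recipe described above:
    pre-training updates all weights, fine-tuning keeps the ViT frozen and
    trains the rest of the model on human-labeled data.
    """
    for param in model.vit.parameters():
        param.requires_grad = False  # ViT stays frozen during fine-tuning
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)
```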
Data generation
To create a pre-training dataset for ScreenAI, we first compile an extensive collection of screenshots from various devices, including desktops, mobile phones, and tablets. This is achieved by using publicly accessible web pages and following the programmatic exploration approach used for the RICO dataset for mobile apps. We then apply a layout annotator, based on the DETR model, that identifies and labels a wide range of UI elements (e.g., image, pictogram, button, text) and their spatial relationships. Pictograms undergo further analysis using an icon classifier capable of distinguishing 77 different icon types. This detailed classification is essential for interpreting the subtle information conveyed through icons. For icons that are not covered by the classifier, and for infographics and images, we use the PaLI image captioning model to generate descriptive captions that provide contextual information. We also apply an optical character recognition (OCR) engine to extract and annotate textual content on screen. We combine the OCR text with the previous annotations to create a detailed description of each screen.
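To give a concrete feel for how these signals could be merged, here is a hypothetical sketch that serializes detected elements into a flat textual screen description; the element types, field names, and output format are illustrative assumptions, not the exact schema used to train ScreenAI.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    kind: str     # e.g., "TEXT", "BUTTON", "PICTOGRAM", "IMAGE"
    box: tuple    # normalized (x0, y0, x1, y1) from the layout annotator
    text: str     # OCR text, icon class, or generated caption

def screen_schema(elements: list) -> str:
    """Serialize detected UI elements into a flat textual description
    that an LLM (or the Screen Annotation task) can consume."""
    lines = []
    # Read order: top-to-bottom, then left-to-right.
    for el in sorted(elements, key=lambda e: (e.box[1], e.box[0])):
        coords = " ".join(f"{c:.2f}" for c in el.box)
        lines.append(f"{el.kind} {coords} {el.text}".strip())
    return "\n".join(lines)

# Example: a toy screen with a title, a search icon, and a button.
demo = [
    UIElement("TEXT", (0.05, 0.02, 0.60, 0.06), "Coffee Corner"),
    UIElement("PICTOGRAM", (0.85, 0.02, 0.95, 0.06), "search icon"),
    UIElement("BUTTON", (0.30, 0.80, 0.70, 0.88), "Order now"),
]
print(screen_schema(demo))
```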
LLM-based data generation
We enhance the pre-training data’s diversity using PaLM 2 to generate input-output pairs in a two-step process. First, screen annotations are generated using the technique outlined above, then we craft a prompt around this schema for the LLM to create synthetic data. This process requires prompt engineering and iterative refinement to find an effective prompt. We assess the generated data’s quality through human validation against a quality threshold. A sample prompt for QA data generation is shown below:
You only speak JSON. Do not write text that isn’t JSON.
You are given the following mobile screenshot, described in words. Can you generate 5 questions regarding the content of the screenshot as well as the corresponding short answers to them?
The answer should be as short as possible, containing only the necessary information. Your answer should be structured as follows:
questions: [
{{question: the question,
answer: the answer
}},
...
]
{THE SCREEN SCHEMA}
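The sketch below shows how a prompt like the one above could be filled with a screen schema, sent to an LLM, and filtered for well-formed JSON before human validation; the `generate_text` client and the shortened prompt string are hypothetical placeholders rather than a real PaLM 2 API.

```python
import json

# Shortened stand-in for the QA prompt shown above; {screen_schema} is where
# the textual screen description gets inserted.
QA_PROMPT = (
    "You only speak JSON. Do not write text that isn't JSON.\n"
    "Generate 5 questions about this screenshot, with short answers, "
    "as {{\"questions\": [{{\"question\": ..., \"answer\": ...}}]}}.\n"
    "{screen_schema}"
)

def generate_qa_pairs(screen_schema: str, generate_text) -> list:
    """Query an LLM with the filled-in prompt and keep well-formed outputs.

    `generate_text` is a hypothetical stand-in for an LLM client: it takes a
    prompt string and returns the model's raw text response.
    """
    raw = generate_text(QA_PROMPT.format(screen_schema=screen_schema))
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return []  # drop malformed generations
    pairs = parsed.get("questions", []) if isinstance(parsed, dict) else []
    # Keep only complete entries; human validation against a quality
    # threshold happens downstream, as described above.
    return [p for p in pairs
            if isinstance(p, dict) and "question" in p and "answer" in p]
```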
By combining the natural language capabilities of LLMs with a structured schema, we simulate a wide range of user interactions and scenarios to generate synthetic, realistic tasks. In particular, we generate three categories of tasks:
- Question answering: The model is asked to answer questions regarding the content of the screenshots, e.g., “When does the restaurant open?”
- Screen navigation: The model is asked to convert a natural language utterance into an executable action on a screen, e.g., “Click the search button.”
- Screen summarization: The model is asked to summarize the screen content in one or two sentences.
Experiments and results
As previously mentioned, ScreenAI is trained in two stages: pre-training and fine-tuning. Pre-training data labels are obtained using self-supervised learning, while fine-tuning data labels come from human raters.
We fine-tune ScreenAI using public QA, summarization, and navigation datasets and a variety of tasks related to UIs. For QA, we use well-established benchmarks in the multimodal and document understanding field, such as ChartQA, DocVQA, Multipage DocVQA, InfographicVQA, OCR-VQA, WebSRC, and ScreenQA. For navigation, the datasets used include Referring Expressions, MoTIF, MUG, and Android in the Wild. Finally, we use Screen2Words for screen summarization. Along with the fine-tuning datasets, we evaluate the fine-tuned ScreenAI model using three novel benchmarks:
- Screen Annotation: Enables the evaluation of the model’s layout annotation and spatial understanding capabilities.
- ScreenQA Short: A variation of ScreenQA, where its ground-truth answers have been shortened to contain only the relevant information, which better aligns with other QA tasks.
- Complex ScreenQA: Complements ScreenQA Short with more difficult questions (counting, arithmetic, comparison, and non-answerable questions) and contains screens with various aspect ratios.
The fine-tuned ScreenAI model achieves state-of-the-art results on various UI- and infographic-based tasks (WebSRC and MoTIF) and best-in-class performance on ChartQA, DocVQA, and InfographicVQA compared to models of similar size. ScreenAI achieves competitive performance on Screen2Words and OCR-VQA. Additionally, we report results on the new benchmark datasets we introduce, so they can serve as a baseline for further research.
Next, we examine ScreenAI’s scaling capabilities and observe that across all tasks, increasing the model size improves performance, and the improvements have not saturated at the largest size.
Conclusion
We introduce the ScreenAI model along with a unified representation that enables us to develop self-supervised learning tasks leveraging data from all these domains. We also illustrate the impact of data generation using LLMs and investigate improving model performance on specific aspects by modifying the training mixture. We apply all of these techniques to build multi-task trained models that perform competitively with state-of-the-art approaches on a number of public benchmarks. However, we also note that our approach still lags behind large models, and further research is needed to bridge this gap.
Acknowledgements
This project is the result of joint work with Maria Wang, Fedir Zubach, Hassan Mansoor, Vincent Etter, Victor Carbune, Jason Lin, Jindong Chen and Abhanshu Sharma. We thank Fangyu Liu, Xi Chen, Efi Kokiopoulou, Jesse Berent, Gabriel Barcik, Lukas Zilka, Oriana Riva, Gang Li, Yang Li, Radu Soricut, and Tania Bedrax-Weiss for their insightful feedback and discussions, along with Rahul Aralikatte, Hao Cheng and Daniel Kim for their support in data preparation. We also thank Jay Yagnik, Blaise Aguera y Arcas, Ewa Dominowska, David Petrou, and Matt Sharifi for their leadership, vision and support. We are very grateful to Tom Small for helping us create the animation in this post.