Enhancing Textbooks with Visuals from the Web for Improved Learning

Textbooks are one of the main mediums for delivering high-quality education to students. In particular, explanatory and illustrative visuals play a key role in retention, comprehension, and general transfer of knowledge. However, many textbooks lack such engaging visuals to support student learning. In this paper, we investigate the effectiveness of vision-language models in automatically enhancing textbooks with images from the web. We collect a dataset of e-textbooks in the math, science, social science, and business domains. We then set up a text-image matching task that involves retrieving and appropriately assigning web images to textbooks, which we frame as a matching optimization problem. Through a crowd-sourced evaluation, we verify that (1) while the original textbook images are rated higher, automatically assigned ones are not far behind, and (2) the precise formulation of the optimization problem matters. We release the dataset of textbooks with an associated image bank to inspire further research in this intersectional area of computer vision and NLP for education.


Introduction
Students use textbooks as one of their primary mediums of learning. It is thus imperative that textbooks are designed to provide a rich and engaging learning environment. Visuals enhance learning in a number of ways, including improved retention of information, comprehension, and knowledge transfer (Carney and Levin, 2002; Dimopoulos et al., 2003; Katsioloudis, 2007; Hibbing and Rankin-Erickson, 2008; Panjwani et al., 2009; Mayer, 2019). Automatic approaches for retrieving and assigning images from the web to textbook chapters can therefore assist textbook designers in the creation of better textbooks. However, this is a very challenging task, as an ideal illustrative visual should not only be related to the textbook material, but also have pedagogical value (see Figure 1). Agrawal et al. (2010, 2011) already enhanced textbooks with images from the Internet. They use similarity scores between textual captions of images and the text present in the textbooks for image assignment. This approach requires the image captions to be available. Beyond the problem of availability, captions are also highly context dependent, which reduces their utility in our setting. In this work, we aim to reinvigorate this area, and present and analyse a real dataset of textbooks. We do so by setting up the problem of image assignment using textbook and Wikipedia images.
In contrast to previous work, we rely on recent advances in vision-language models, such as CLIP (Radford et al., 2021) and DALL-E (Ramesh et al., 2021). We analyze our dataset to gain insights into the organization of concepts and illustrative images within the textbooks. This analysis inspires the formulation of a new optimization problem focused on modeling illustrations in textbooks. The solution to this problem maximizes the coverage of the illustrations while minimizing redundancy during image-to-paragraph assignments. Our approach uses CLIP to retrieve and appropriately assign images to long-form text, in particular a textbook section. The overall assignment is then obtained in the following stages: 1) Given a piece of text, produce a set of concepts that should be addressed by a visualization; 2) Given an image and a concept, determine their mutual relevance; and 3) Given a piece of text (e.g., a textbook section), produce an adequate assignment of images.
Each of the three sub-problems listed above can be solved with a variety of approaches; indeed, we explore several variants, which we describe in Section 4. Because of the modularity of this approach, each of the sub-problems can be improved independently and adapted to the dataset in future work. Overall, we contribute the following:
• A dataset that contains text and images drawn from 35 textbooks covering math, business, social sciences, and science, in addition to a secondary image bank of ∼312K images taken from Wikipedia (Section 3).
• Formalization of multiple textbook enrichment optimization goals (Section 4).
• Human evaluation and an in-depth examination of the possible failure modes and challenges of the proposed methods (Section 5).

Related Work
Vision Language Models: Alignment between texts and images has seen rapid progress recently with models such as CLIP (Radford et al., 2021) and DALL-E (Ramesh et al., 2021). However, their use is limited to short and specific text prompts, to which performance is usually quite sensitive. We focus on the problem of retrieving images for very long textual inputs, specifically a textbook section, where it is unclear which part of the text specifically describes the relevant image.
Image Text Matching for Long Texts: Recently, Wang et al. (2022a,b) and Zeng et al. (2022) trained better language-vision representations with more nuanced associations, such as multiple vision tasks or finer image-text alignments. However, this progress is mainly confined to datasets like MS-COCO (Lin et al., 2014) and Flickr-30K (Young et al., 2014), which contain natural images and their rather short captions. Additionally, Schneider et al. (2021) show that current multimodal models perform poorly at retrieving relevant images for longer and more complex textual inputs. The reason for this poor performance is the pre-training on shorter and very specific image captions. This limitation is especially relevant to our work, which focuses on even longer text inputs. We explore the problem of assigning images to lengthy text, which highlights issues such as ensuring comprehensive coverage of concepts and avoiding redundant image illustrations. To move even closer to a real-world setting, we perform this task with actual textbooks and a human study.
Enriching Text with Images: The task of textbook enrichment was first explored by Agrawal et al. (2010, 2011), who assume that web images have associated relevant captions. We note that image captions are largely dependent on the context in which the image was originally used. The language of the caption may not even match the textbook's language. To alleviate all this, we do not assume that the images have associated relevant captions. Seo et al. (2015); Kembhavi et al. (2017); Lee et al. (2022) also studied associating images with textual information. However, their primary goal has largely been to comprehend the image content, and thus differs from our objective. Finally, there has been more past work on NLP applied to textbooks (Sachan and Xing, 2017; Sachan et al., 2017, 2020). However, the goals of these works also differ significantly from ours.

Dataset
We now present the curation process and structure of our dataset, along with an analysis.

Data Collection
OpenStax Books. We source both the text and assigned images from 35 textbooks from the online textbook publisher openstax.org, covering four subjects: business, social sciences, sciences, and maths. Each textbook is organized into chapters, sections, subsections, and paragraphs. See their distribution in our dataset in Table 1. For each subsection of the textbook, we identify the following key elements:
• Text: Raw text from the subsection.
• Phrases: Raw text from the subsection decomposed with overlapping sliding windows.

Wikipedia images. To mimic the task of textbook enrichment, we use a dataset of images from Wikipedia that are relevant to the concepts in the OpenStax Books dataset. This dataset serves as a proxy for images from the web. We search for relevant Wikipedia articles for each concept, with a maximum of 20 articles retrieved per concept. From these articles, we extract images and their captions by searching for the article in the WIT dataset (Srinivasan et al., 2021) or directly from the article.
The final dataset includes ∼312K unique images from the relevant articles.
Image Bank. We combine the OpenStax Books images and the Wikipedia images to form the Image Bank. Our objective is to retrieve and assign relevant images from the Image Bank to each section present in the OpenStax Books dataset.

Dataset Analysis
We profile the dataset according to a series of questions, which will inform the problem formulation.
Q1. How are concepts distributed? The patterns in concept mentions are similar across subjects (Figure 2b). The distribution of concepts within subsections (Figure 2a) reveals an average of 5.6 concepts per subsection. An average concept is mentioned 2.7 times within a subsection (Figure 2b); that is, concepts are infrequently mentioned in the subsection. Notably, each section concept is mentioned in only 1.7 subsections on average (Figure 2c), emphasizing the high localization of concepts to specific subsections.
Q2. What influences the number of assigned images in a subsection? On average, each section consists of 5.5 subsections and 3.3 assigned images (Table 1). To answer the question at hand, we conduct a regression analysis with the number of images in a subsection as the predicted variable, and the following as features:
• concepts/words/paragraphs: their total # in the subsection.
• concepts_uniq: # of unique concepts mentioned in the subsection.
• %sec_concepts: % of unique concepts from the section in the subsection.
• %sec_concept: % of total concept mentions from the section in the subsection.
• %sec_words: % of total words in the section which are in the subsection.
• %sec_paragraphs: % of paragraphs in the section in the subsection.
• position of the subsection in the section, from 0 (beginning) to 1 (end).
• subject of the book.
Based on the results in Table 3, the number of images assigned to a subsection can be best predicted from the total number of concepts, words, and paragraphs in the subsection. Unexpectedly, this is not true for the number of unique concepts. Furthermore, the position of the subsection within the section is negatively correlated with the image count; that is, subsections located later in a section have fewer images. The subject of the book also impacts the subsection's image count, with differing coefficients for each subject. Overall, the regression model yields a Pearson correlation of 0.59 with p < 10^-4, a high degree of predictability.
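As an illustration, a minimal sketch of such a regression analysis is given below; the feature table, its column names, and the file name are hypothetical placeholders for the features listed above, not the released data schema.

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Hypothetical feature table: one row per subsection (placeholder schema).
subsections = pd.read_csv("subsection_features.csv")
feature_cols = ["concepts", "words", "paragraphs", "concepts_uniq",
                "pct_sec_concepts", "pct_sec_words", "pct_sec_paragraphs",
                "position"]  # plus one-hot subject indicators in practice

X = subsections[feature_cols].to_numpy()
y = subsections["num_images"].to_numpy()

reg = LinearRegression().fit(X, y)
pred = reg.predict(X)
r, p = pearsonr(pred, y)  # correlation between predicted and true image counts
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```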
Q3. Are images exclusive to the assigned subsection? We use CLIP's similarity scores for image-phrase relevance (detailed in Section 4). For each image in the textbook, we distinguish between the present subsection and the ones immediately before and after. Then, we assign images to the subsection with the highest-matching phrase. The percentages of most-similar images in the before and after subsections are nearly equal (Figure 3), which is contrary to the intuition that the subsections after the current one would refer to the concepts in the image. The difference between the before/after and present subsections is the greatest in the business category, indicating that images assigned to a particular subsection are comparatively unique there. Such uniqueness, as determined by the CLIP scores, is most absent in mathematics books. Overall, the images do not exclusively best-match the phrases from the gold-assigned subsection.
Q4. Are concept mentions associated with assigned images? We rank subsection phrases using CLIP similarity scores with subsection images. We use this ranking to calculate the percentage of concepts that were mentioned in the top-similar phrases associated with the gold images in the subsection. This way, we evaluate whether text-phrases with a higher association to the gold image also have more concept mentions. Indeed, there is a correspondence between the gold images and phrases with concept mentions (Figure 4). This warrants further usage of CLIP scores as a measure for matching concepts to images.
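As a sketch, this analysis can be computed as follows; the precomputed phrase-image similarity matrix and the substring test for concept mentions are our assumptions.

```python
import numpy as np

def concept_coverage_at_k(sim, phrases, concepts, k=3):
    """Fraction of a subsection's concepts mentioned in the k phrases most
    similar to its gold images.

    sim: (num_phrases, num_gold_images) CLIP similarity matrix (assumed precomputed)
    phrases: list of phrase strings for the subsection
    concepts: list of concept strings for the subsection
    """
    phrase_scores = sim.max(axis=1)  # best similarity of each phrase to any gold image
    top_text = " ".join(phrases[i] for i in np.argsort(-phrase_scores)[:k]).lower()
    mentioned = sum(1 for c in concepts if c.lower() in top_text)
    return mentioned / max(len(concepts), 1)
```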

Image Retrieval and Assignment
We first describe an image retrieval model and then formalize our task and the optimization approach. CLIP (Radford et al., 2021) is a state-of-the-art vision-language model trained on many image-caption pairs from the web by maximizing the dot-product similarity between the image and caption encodings. We further fine-tune CLIP on image-text pairs from the OpenStax Books dataset.
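As a rough sketch, image-phrase similarities can be computed with an off-the-shelf CLIP checkpoint via the Hugging Face transformers API; the checkpoint name is an illustrative choice, not necessarily the one used here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def phrase_image_similarity(phrases, image_paths):
    """Return a (num_images, num_phrases) matrix of cosine similarities.
    Phrases are assumed to fit CLIP's 77-token context window."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=phrases, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return img @ txt.T
```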

Image Assignment Formulation
Our formulation focuses on the assignment of images to subsections. We begin with notation: we decompose the text of a subsection into phrases using a sliding window approach, and these phrases may mention a particular concept. For a fair comparison, we assign the same number of images to each subsection as in the gold assignment. This can also be automated with the image count prediction (Section 3.2/Q2).
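A minimal sketch of the sliding-window decomposition; the window and stride sizes are illustrative assumptions.

```python
def sliding_window_phrases(text, window=20, stride=10):
    """Split subsection text into overlapping phrases of `window` words each."""
    words = text.split()
    phrases = []
    for start in range(0, max(len(words) - window + 1, 1), stride):
        phrases.append(" ".join(words[start:start + window]))
    return phrases
```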

Local Assignment
The most straightforward solution is to select an image for each subsection independently by maximizing the subsection text-image similarity. Specifically, we assign to each subsection u the image i ∈ I that maximises this similarity. Here, I denotes the set of all images in the image bank, and sim(i, t) denotes the probability (dot-product similarity normalised across images) of an image i matching a phrase t, as given by the fine-tuned CLIP model. While local assignment is fast and simple, our qualitative analysis reveals that it lacks global coherence and may assign images depicting overlapping concepts to the same section. For example, if every subsection mentions the concept "molecule", then all subsections can be assigned the same image of a molecule. This finding aligns with our previous results (Section 3.2/Q3) and is supported by the redundancy metrics in Section 5.
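A minimal sketch of local assignment over a precomputed similarity matrix; scoring an image by its best-matching phrase is our assumption about the exact aggregation.

```python
import numpy as np

def local_assignment(sim_per_subsection, num_images_per_subsection):
    """Independent image choice per subsection.

    sim_per_subsection: list (one entry per subsection) of
        (num_phrases, num_bank_images) similarity matrices.
    num_images_per_subsection: images per subsection (taken from the gold assignment).
    """
    assignment = {}
    for u, sim in enumerate(sim_per_subsection):
        image_scores = sim.max(axis=0)  # best phrase match for each bank image
        top = np.argsort(-image_scores)[:num_images_per_subsection[u]]
        assignment[u] = top.tolist()
    return assignment
```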

Global Assignment
The analysis in Section 3.2 revealed that the most-relevant phrases for gold images are not restricted to the assigned subsection, and that concepts are localized within their respective sections. Therefore, for better global coherence in our assignments, we assign images based on concepts rather than phrases. Specifically, we select a subset of images that covers most of the concepts (coverage) while avoiding overlaps (redundancy). To define coverage and redundancy functions for the concepts in a section, we first define a boolean function indicating whether an image i covers a concept c. Next, we formalize coverage and redundancy.
Coverage. The coverage of a section s by a subset of images I′ is the number of unique concepts in s that are covered by images in I′.

Redundancy. The redundancy of a section s with respect to a set of images I′ is the total number of times that concepts in s are multiply covered.
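A minimal sketch of these quantities, assuming a covers(image, concept) predicate implemented as a thresholded CLIP image-text similarity; the threshold and the reading of "multiply covered" as coverings beyond the first are our assumptions.

```python
def covers(image, concept, sim_fn, threshold=0.3):
    """Boolean: does `image` cover `concept`?  Approximated here by a
    thresholded image-text similarity (threshold is illustrative)."""
    return sim_fn(image, concept) >= threshold

def coverage(images, concepts, covers_fn):
    """Number of unique section concepts covered by at least one image."""
    return sum(1 for c in concepts if any(covers_fn(i, c) for i in images))

def redundancy(images, concepts, covers_fn):
    """Total number of times concepts are covered beyond the first time."""
    extra = 0
    for c in concepts:
        hits = sum(1 for i in images if covers_fn(i, c))
        extra += max(hits - 1, 0)
    return extra
```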
We now introduce the notion of set submodularity, which will be necessary for proving approximation bounds for the optimization.

Definition 4.1. A function f is said to be set submodular if and only if, for all A ⊆ B and any item x ∉ B, f(A ∪ {x}) − f(A) ≥ f(B ∪ {x}) − f(B). Informally, the function yields diminishing returns for item x.

Theorem 4.2. The coverage function C is set submodular.

Proof. Let C′ and C′′ be the sets of concepts from a subsection covered by the images in I′ and I′′, respectively. As per the definition of C, we have C′′ ⊆ C′. Let C_i be the concepts covered by image i.
Theorem 4.3. The negative redundancy function −R is set submodular. That is, for sets of images I′′ ⊆ I′ ⊆ I and any image i ∈ I − I′, it holds that R(I′′ ∪ {i}) − R(I′′) ≤ R(I′ ∪ {i}) − R(I′).

Proof. Similarly to Theorem 4.2, for the redundancy function R we observe the analogous inequality.

Observation 4.4. Both C and R are monotone.
For the global assignment, we choose images I′ ⊆ I that maximise the objective G (Equation 13), the sum of the coverage and the negative redundancy. G is a submodular function because it is a sum of two submodular functions. Finding the optimal solution to G is NP-hard (Lovász, 1983). However, since G is submodular, a greedy algorithm yields a 1 − 1/e ≈ 63% approximation of the optimum of G under the cardinality constraint |I′| ≤ B, where B is the budget (Nemhauser et al., 1978). Once the images I′ are greedily computed for a section, we assign each image i ∈ I′ to the subsection u which maximises C({i}, u), i.e., to the subsection in which the image i covers the most concepts.
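A minimal sketch of the greedy selection under a budget, building on the coverage and redundancy helpers sketched above; the unweighted difference C − R is our reading of the objective G.

```python
def greedy_global_selection(image_bank, concepts, covers_fn, budget):
    """Greedily pick up to `budget` images maximizing coverage minus redundancy."""
    selected, remaining = [], list(image_bank)

    def objective(images):
        return (coverage(images, concepts, covers_fn)
                - redundancy(images, concepts, covers_fn))

    for _ in range(budget):
        best = max(remaining, key=lambda i: objective(selected + [i]) - objective(selected))
        if objective(selected + [best]) <= objective(selected):
            break  # no remaining image improves the objective
        selected.append(best)
        remaining.remove(best)
    return selected
```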

Joint Assignment
The local assignment captures relevance, while the global one also captures redundancy. To optimize both, we formulate a joint objective J = β · G + S with a trade-off hyper-parameter β ≥ 0. Note that J is a submodular function, since it is the sum of two other submodular functions: β · G and S. The former is submodular due to the non-negativity of β and the previously proven submodularity of G. The submodularity of S is proven below.

Considering the submodularity of J, we select images greedily, similarly to optimizing G. Once a set of images I′ is greedily computed for a section, we assign each image i ∈ I′ to the subsection u which maximises S({i}, u) + β · C({i}, u), i.e., to the subsection in which the image i covers the most concepts and has the most similar text. Note that the local and global assignments are special cases of this formulation. This formulation achieves our desideratum: images are assigned to specific subsections while also taking the global context into account.
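A sketch of the joint greedy selection, reusing the helpers above; the section-level relevance S is approximated here as a facility-location score (each phrase is credited with its best similarity to any selected image), which is an assumption about the exact form of S.

```python
import numpy as np

def joint_selection(sim, concepts, covers_fn, image_ids, budget, beta=1.0):
    """Greedy maximization of J = beta * (coverage - redundancy) + S.

    sim: (num_phrases, num_bank_images) similarity matrix for the section.
    image_ids: identifiers of the bank images (columns of `sim`).
    """
    def S(selected):
        return float(sim[:, selected].max(axis=1).sum()) if selected else 0.0

    def J(selected):
        imgs = [image_ids[j] for j in selected]
        return beta * (coverage(imgs, concepts, covers_fn)
                       - redundancy(imgs, concepts, covers_fn)) + S(selected)

    selected, remaining = [], list(range(sim.shape[1]))
    for _ in range(budget):
        best = max(remaining, key=lambda j: J(selected + [j]) - J(selected))
        selected.append(best)
        remaining.remove(best)
    return [image_ids[j] for j in selected]
```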

Human Evaluation
The goal of improving textbooks is to help students learn.Testing a wide range of models directly by monitoring learning progress would require a very expensive long-term evaluation.Instead we turn to an intrinsic crowd-sourced evaluation where we ask teachers what they think about the qualities of the assignment.

Setup
We selected 32 crowd-workers from Prolific who are native English speakers and work in education. We compare 4 different assignments: the gold one by a human and three automatic ones (Section 4). Each participant is assigned a single section and evaluates all 4 systems on this section. This methodology may cause an unwanted priming effect, which we address in Section 5.2. We chose this setup deliberately because a lot of the annotation time is spent on reading the section text, and we wanted to share this cost by annotating multiple assignments at once. Our evaluation consists of close-ended (limited number of answers) questions that pertain to both the local (subsection) and global (section) textual context (full annotation guidelines are in Appendix A):

Local evaluation:
• Is this image relevant to educational concepts described in this subsection text?
• Is this image redundant compared to previous images in this subsection?
• What is the type of this image?

Global evaluation:
• Is this image relevant to educational concepts described in this section?
• Is this image redundant compared to previous images in this section?
• Is this image didactically useful for explaining this section text?
The annotators first answer the local questions for all images and then the global questions. This way, we made sure that they scanned the entire section and had some overview of all the images and how they relate to each other before answering the global questions. The annotation pipeline (for the 4 assignments, local/global) is shown in Figure 6 and the user interface in Figure 5.

Evaluation Results
We first verify the validity of the evaluation setup by checking whether the position in the evaluation queue has an effect on the evaluation scores. Recall that the annotators were shown the same textbook section with 4 alternative image assignments. While Figure 7 shows some variance along the independent variable of evaluation position, the differences are not significant, justifying our evaluation setup.
We then focus on the two most important evaluation criteria: relevancy and redundancy. The results, shown in Figure 8, clearly show a preference for the Gold image assignment, suggesting that automatic assignment is still inferior to humans. Among the automatic methods, Local performs best in terms of relevancy. However, it is outperformed by Joint with respect to redundancy. We also note that there is very little difference between the Local and Global evaluation categories. This may be caused by evaluation bias (i.e., annotators are likely to give similar scores for both local and global questions). Next, we examine all the remaining evaluation categories in Table 2. While Gold is the best across all of them, the significance of the difference varies. The Joint assignment is never the worst, suggesting it is a robust choice.
Figure 9: Category-wise gold and automatic image assignment scores (0-9); higher scores indicate better performance.In each cell, the top-left corner displays the score for the gold assignment, while the bottom-right displays the score for the automatic assignment (aggregated across all methods).

Qualitative Analysis
The Joint optimization method aims to reduce the reader's cognitive load compared to the Local method, which aggregates scores from all text-phrases. The Local method results in repetitively covering a single concept from the section with the top-ranked images.
In contrast, Joint and Global assign images covering a wider range and variety of concepts in the section, enabling a greater level of text enrichment. Appendix B shows examples of assignments by these approaches. We remark that structured image types, such as graphs, multiple images, or those that are less identifiable, systematically receive lower ratings (Figure 9). We now elaborate on the two major limitations of our models.
Varied domain of images. One limitation of our approach is demonstrated in Figure 10, where it struggles to model non-natural images such as diagrams, graphs, and plots. These images often represent abstract concepts, relationships, and events which cannot be well modeled by models like CLIP that are predominantly trained on natural images.
Long textual description of concepts. Another source of error was that some concepts have long textual descriptions. For example, the description of Stokes' theorem spans multiple paragraphs. Learning to associate an image with a part of the text may lead to loose and spurious associations, resulting in poor downstream assignment performance. This highlights the need for vision-language representations that can effectively model long text descriptions and establish better image-text associations.

Conclusion and Future Work
We presented a dataset and a new task of enriching textbooks with visuals from the web. We proposed several technical solutions for this problem using neural image retrievers combined with a new assignment optimization setup. Annotations by workers in the education industry verified that, even though the human assignment is still of the highest quality, the automatic assignments are not far behind. There are multiple avenues for making further progress on this problem. First, individual concept importances and text-image relevance models could be improved and plugged into the existing algorithms. The varying domains of images and lengthy textual descriptions of concepts present challenges that could pave the way for exploring new approaches to learning image-text associations. Professional textbook designers could also be included to further refine the assignment optimization objective and pose this as a human-AI collaboration problem.

Limitations and Ethics Statement
While automatically enhancing textbooks with images holds promise, we point out:
• Image selection bias: Images from the web are at risk of being biased because they do not necessarily come from the same distribution as textbook graphics. However, images from Wikipedia are possibly more suitable for this purpose because they are of an encyclopedic nature.
• Intellectual property: Practitioners who use our automatic image assignment method for textbooks should take care to always follow the associated copyrights and attributions.
• Pedagogical usefulness: While we employed workers to intrinsically judge the quality of the assignments, the results should be replicated with an extrinsic evaluation (beyond the scope of our study) which also considers the impact on student learning and information retention.
• Quality control: The target audience of textbooks is students, who are a sensitive group. In the current formulation, the optimization will always produce some assignment, but there is no mechanism for quality assurance. This could result in inappropriate images being assigned, and expert human scrutiny should be employed.
Figure 13: Image assignments for the "Pathways to Engagement" subsection in the "Engagement in a Democracy" section of the textbook "American Government 3e".
Figure 14: Image assignments for the "Factors of Engagement" subsection in "Engagement in a Democracy" section of the textbook "American Government 3e".
Figure 15: Image assignments for the "Interpretation of Curl" subsection in the "Stokes' Theorem" section of the textbook "Calculus Volume 3".
Figure 16: Image assignment for the "Learning Objectives" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".
Figure 17: Image assignment for the "Mammalian System" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".
Figure 18: Image assignments for the "Stokes' Theorem" subsection in the "Stokes' Theorem" section of the textbook "Calculus Volume 3".
Figure 19: Image assignments for the "Direct Diffusion" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".
Figure 20: Image assignments for the "Skin and Gills" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".

C CLIP Fine-tuning
The CLIP model is composed of two parts: (1) the Image Encoder, responsible for encoding the p-th input image into a 512-dimensional vector I_p; and (2) the Text Encoder, which encodes the q-th input text into a 512-dimensional vector T_q. The model is trained with a contrastive loss on a dataset of 400 million image-text pairs from the web. During training, relevant image-caption pairs maximize I_p · T_q, while I_{p′} · T_{q′} is minimized for unrelated pairs. During our fine-tuning, we create mini-batches of image-text pairs extracted from the subsections in the OpenStax Books dataset.
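A minimal sketch of one fine-tuning step using the transformers in-batch contrastive loss; the checkpoint, learning rate, and batch construction (pairing each subsection phrase with one of its gold images) are assumptions.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)  # illustrative learning rate

def training_step(batch_texts, batch_images):
    """One contrastive step on a mini-batch of (text, image) pairs from subsections."""
    inputs = processor(text=batch_texts, images=batch_images,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs, return_loss=True)  # in-batch contrastive loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```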
During inference, we encode all images in the Image Bank as {I_1, I_2, ..., I_N} and all text queries belonging to a particular subsection as {T_1, T_2, ..., T_M}. Next, we obtain similarity scores for all images in the Image Bank with respect to the k-th text query by calculating S_k = ⟨I_1 · T_k, I_2 · T_k, ..., I_N · T_k⟩. We then normalize S_k to get P_k = SOFTMAX(S_k). Finally, for each subsection, we compute the relevance scores of all images in the Image Bank by aggregating the P_k values, resulting in P = AGG(P_1, P_2, ..., P_M), where AGG is an aggregate function such as the mean of N-dimensional vectors.
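A small numpy sketch of this inference procedure over precomputed embeddings:

```python
import numpy as np

def subsection_image_scores(image_embeds, text_embeds, agg=np.mean):
    """Aggregate per-query softmax scores into one relevance vector per subsection.

    image_embeds: (N, d) Image Bank encodings
    text_embeds:  (M, d) encodings of the subsection's text queries
    Returns a length-N vector P of relevance scores.
    """
    S = text_embeds @ image_embeds.T                         # (M, N) dot products
    S = S - S.max(axis=1, keepdims=True)                     # numerical stability
    P_k = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)   # softmax over images per query
    return agg(P_k, axis=0)                                  # AGG, e.g., the mean
```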
In our search for the best image retrieval model, we provide details on various methods of creating image-text pairs from a subsection.

C.1 Evaluation Metrics
We now explain the techniques for evaluating the retrieval model. We consider the images originally assigned to a subsection as the gold images for that subsection. Therefore, for each subsection, the Image Bank is categorized into gold images and non-gold images. To gauge the retrieval quality for a subsection under a given retrieval approach, we use the following metrics, and ultimately report their average.
• Recall@K: fraction of gold images retrieved in the top-K retrievals.
• Recall@R: fraction of gold images retrieved in the top-R retrievals, where R is the number of gold images in the subsection.
• Precision@K: fraction of retrieved images (K in total) that are gold images.
• Precision@R: fraction of retrieved images (R in total) that are gold images, where R is the number of gold images in the subsection.
• Mean Gold Rank: average rank of each gold image under relevancy sorting.
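These metrics can be computed from a relevance-sorted ranking, for example as in the following sketch:

```python
import numpy as np

def retrieval_metrics(ranked_image_ids, gold_ids, k=10):
    """Recall@K/R, Precision@K/R and Mean Gold Rank for one subsection."""
    gold = set(gold_ids)
    r = len(gold)
    hits_at = lambda n: sum(1 for img in ranked_image_ids[:n] if img in gold)
    ranks = [rank for rank, img in enumerate(ranked_image_ids, start=1) if img in gold]
    return {
        "recall@k": hits_at(k) / r,
        "recall@r": hits_at(r) / r,
        "precision@k": hits_at(k) / k,
        "precision@r": hits_at(r) / r,  # equals Recall@R by definition
        "mean_gold_rank": float(np.mean(ranks)),
    }
```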

C.2 Zero-Shot CLIP
In the experiments discussed in this section, we use a pre-trained CLIP (without fine-tuning) to fetch relevant images for each subsection. The experiment results are presented in Table 4. For inference, we adopt a similar approach to the one described earlier. Below, we elaborate on the main differences in the experimental settings for these studies:
1. Concepts: We use all the concepts from a subsection as its text queries (each concept is a separate text query) and use the mean for AGG.
2. Clustered-Concepts: We use all the concepts from a subsection as its text queries (each concept is a separate text query). Next, we form clusters of text queries using k-means (k = 10) clustering on the text encodings. For inference, we apply AGG to the relevance scores P of the text queries belonging to each cluster. Using the cluster's aggregated relevance score, we retrieve one image at a time from each cluster in a round-robin fashion (a sketch is given after this list). This experiment tests if giving equal importance to various "concept clusters" leads to increased variation in the retrieved results and better performance.
3. Concatenated-Concepts: We concatenate different concepts from a subsection together, use these concatenated phrases as the text queries for each section, and use the mean as the AGG function. This experiment tests if giving more context (multiple terms provide better context) improves retrieval.
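A minimal sketch of the Clustered-Concepts variant; the clustering call and the per-concept relevance matrix are assumed inputs, following the description above.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_concepts_retrieval(concept_embeds, P, num_images, k=10):
    """Round-robin retrieval across concept clusters.

    concept_embeds: (M, d) CLIP text encodings of the subsection's concepts
    P: (M, N) per-concept softmax relevance scores over the Image Bank
    num_images: how many images to retrieve for the subsection
    """
    k = min(k, len(concept_embeds))
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(concept_embeds)
    cluster_scores = [P[labels == c].mean(axis=0) for c in range(k)]  # AGG = mean
    rankings = [list(np.argsort(-s)) for s in cluster_scores]
    retrieved, seen = [], set()
    while len(retrieved) < num_images and any(rankings):
        for ranking in rankings:               # one image per cluster, round-robin
            while ranking and ranking[0] in seen:
                ranking.pop(0)
            if ranking:
                img = int(ranking.pop(0))
                seen.add(img)
                retrieved.append(img)
            if len(retrieved) >= num_images:
                break
    return retrieved
```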

Figure 1: Illustration of good and bad image assignments. The first subsection does not require an image. The second subsection in the bad assignment has a related image but without a strong connection. Likewise, the last subsection has a picture of Calvin, which is related but does not have a high pedagogical value.

Figure 2: Distribution of concept-mentions. On average: (a) a subsection has 5.6 distinct concepts; (b) a concept is mentioned 2.7 times within a subsection; and (c) a concept from a section is mentioned in 1.7 of its subsections.

Figure 4: Proportion of concepts mentioned in the top-k most similar text-phrases to the gold set of images in a subsection. On average, for each subsection, 50% of the concepts are mentioned in the top-3 most similar text-phrases with the gold images.
Theorem 4.5. The local assignment function S is set submodular. That is, for sets of images I′ and I′′ such that I′′ ⊆ I′ ⊆ I and any image i ∈ I − I′, it holds that S(I′′ ∪ {i}) − S(I′′) ≥ S(I′ ∪ {i}) − S(I′). Proof.

Figure 5: Annotation window for a single image within a subsection (there may be multiple images in a subsection). Note that the image featured is not a suitable choice to illustrate this particular subsection.

Figure 6: The annotator is presented with the same section but with a randomized image assignment order.

Figure 10: Our approach assigns the left and right images to Stokes' Theorem and Factors Affecting Engagement in Democracy, respectively. In both cases, the image is not specific enough and only very loosely relevant to the subsection text. This underscores the weakness of our model, which learns relevance with limited textual context.

Figure 11: Image assignments for the "Why Get Involved?" subsection in the "Engagement in a Democracy" section of the textbook "American Government 3e".

Figure 12: Image assignments for the "The Advantages of Corporate Status" subsection in the "Corporate Law and Corporate Responsibility" section of the textbook "Business Ethics".

Figure 21: Image assignments for the "Balancing the Many Responsibilities of a Corporation" subsection in the "Corporate Law and Corporate Responsibility" section of the textbook "Business Ethics".

Figure 22: Image assignments for the "The Two Sides of the Corporate Responsibility Debate" subsection in the "Corporate Law and Corporate Responsibility" section of the textbook "Business Ethics".

Figure 23: Image assignments for the "Tracheal Systems" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".

Figure 24: Image assignments for the "Lungs: Bronchi and Alveoli" subsection in the "Systems of Gas Exchange" section of the textbook "Biology 2e".

Figure 25: Image assignments for the "Stokes' Theorem Proof" subsection in the "Stokes' Theorem" section of the textbook "Calculus Volume 3".