Eye On AI

Week Ending 2.25.2024

RESEARCH WATCH: 2.25.2024

Computer Vision for Multimedia Geolocation in Human Trafficking Investigation: A Systematic Literature Review

The paper provides a literature review on using computer vision techniques like deep learning for determining the geographic location of multimedia related to human trafficking investigations. This could help investigators more quickly locate victims and perpetrators.

Authors:  Opeyemi Bamigbade, John Sheppard, Mark Scanlon

Link:  https://arxiv.org/abs/2402.15448v1

Date: 2024-02-23

Summary:

The task of multimedia geolocation is becoming an increasingly essential component of the digital forensics toolkit to effectively combat human trafficking, child sexual exploitation, and other illegal acts. Typically, metadata-based geolocation information is stripped when multimedia content is shared via instant messaging and social media. The intricacy of geolocating, geotagging, or finding geographical clues in this content is often overly burdensome for investigators. Recent research has shown that contemporary advancements in artificial intelligence, specifically computer vision and deep learning, show significant promise towards expediting the multimedia geolocation task. This systematic literature review thoroughly examines the state of the art in leveraging computer vision techniques for multimedia geolocation and assesses their potential to expedite human trafficking investigation. The review provides a comprehensive overview of computer vision-based approaches to multimedia geolocation, identifies their applicability in combating human trafficking, and highlights the potential implications of enhanced multimedia geolocation for prosecuting human trafficking. A total of 123 articles inform this systematic literature review. The findings suggest numerous potential paths for future impactful research on the subject.

--------------------------------------------------------------------------------------------------------

A Data-Centric Approach To Generate Faithful and High Quality Patient Summaries with Large Language Models

The paper explores improving patient summaries from doctor's notes using large language models. Well-summarized records could vastly improve patient understanding and healthcare delivery.

Authors:  Stefan Hegselmann, Shannon Zejiang Shen, Florian Gierse, Monica Agrawal, David Sontag, Xiaoyi Jiang

Link:  https://arxiv.org/abs/2402.15422v1

Date: 2024-02-23

Summary:

Patients often face difficulties in understanding their hospitalizations, while healthcare workers have limited resources to provide explanations. In this work, we investigate the potential of large language models to generate patient summaries based on doctors' notes and study the effect of training data on the faithfulness and quality of the generated summaries. To this end, we develop a rigorous labeling protocol for hallucinations, and have two medical experts annotate 100 real-world summaries and 100 generated summaries. We show that fine-tuning on hallucination-free data effectively reduces hallucinations from 2.60 to 1.55 per summary for Llama 2, while preserving relevant information. Although the effect is still present, it is much smaller for GPT-4 when prompted with five examples (0.70 to 0.40). We also conduct a qualitative evaluation using hallucination-free and improved training data. GPT-4 shows very good results even in the zero-shot setting. We find that common quantitative metrics do not correlate well with faithfulness and quality. Finally, we test GPT-4 for automatic hallucination detection, which yields promising results.
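
As a concrete, minimal sketch of the data-centric step described here, filtering annotated training summaries down to hallucination-free examples before fine-tuning might look as follows (the record fields and annotation format are illustrative assumptions, not the paper's actual labeling schema):

    # Sketch: keep only hallucination-free (note, summary) pairs for
    # fine-tuning. The dict layout is a hypothetical stand-in for the
    # paper's expert annotations.
    def filter_hallucination_free(annotated_records):
        return [
            (r["doctor_notes"], r["summary"])
            for r in annotated_records
            if r["hallucination_count"] == 0
        ]

    records = [
        {"doctor_notes": "...", "summary": "...", "hallucination_count": 0},
        {"doctor_notes": "...", "summary": "...", "hallucination_count": 2},
    ]
    clean_pairs = filter_hallucination_free(records)  # one pair survives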

--------------------------------------------------------------------------------------------------------

On Minimal Depth in Neural Networks

The paper analyzes the theoretical minimal depth, i.e., the number of layers, required for neural networks to represent certain functions. Understanding representability could advance neural network design.

Authors:  Juan L. Valerdi

Link:  https://arxiv.org/abs/2402.15315v1

Date: 2024-02-23

Summary:

A characterization of the representability of neural networks is relevant to comprehending their success in artificial intelligence. This study investigates two topics in ReLU neural network expressivity and their connection with a conjecture on the minimum depth required to represent any continuous piecewise linear (CPWL) function. The topics are the minimal depth representation of the sum and max operations, as well as the exploration of polytope neural networks. For the sum operation, we establish a sufficient condition on the minimal depth of the operands to find the minimal depth of the operation. In contrast, for the max operation, a comprehensive set of examples is presented, demonstrating that no sufficient conditions, depending solely on the depth of the operands, would imply a minimal depth for the operation. The study also examines the minimal depth relationship between convex CPWL functions. On polytope neural networks, we investigate several fundamental properties, deriving results equivalent to those of ReLU networks, such as depth inclusions and depth computation from vertices. Notably, we compute the minimal depth of simplices, which is strictly related to the minimal depth conjecture in ReLU networks.
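
For a concrete flavor of these depth questions, note the standard identity showing that a two-argument max needs only one hidden ReLU layer:

    \max(x, y) = x + \mathrm{ReLU}(y - x)

which evaluates to y when y > x and to x otherwise. The open question behind the minimal depth conjecture is how the required depth grows for the max of many arguments, and hence for general CPWL functions.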

--------------------------------------------------------------------------------------------------------

Representing Online Handwriting for Recognition in Large Vision-Language Models

The paper proposes representing online handwriting input for recognition by large vision-language models. This could significantly advance tablet interfaces for search, indexing, and other applications.

Authors:  Anastasiia Fadeeva, Philippe Schlattner, Andrii Maksai, Mark Collier, Efi Kokiopoulou, Jesse Berent, Claudiu Musat

Link:  https://arxiv.org/abs/2402.15307v1

Date: 2024-02-23

Summary:

The adoption of tablets with touchscreens and styluses is increasing, and a key feature is converting handwriting to text, enabling search, indexing, and AI assistance. Meanwhile, vision-language models (VLMs) are now the go-to solution for image understanding, thanks to both their state-of-the-art performance across a variety of tasks and the simplicity of a unified approach to training, fine-tuning, and inference. While VLMs obtain high performance on image-based tasks, they perform poorly on handwriting recognition when applied naively, i.e., by rendering handwriting as an image and performing optical character recognition (OCR). In this paper, we study online handwriting recognition with VLMs, going beyond naive OCR. We propose a novel tokenized representation of digital ink (online handwriting) that represents a time-ordered sequence of strokes both as text and as an image. We show that this representation yields results comparable to or better than state-of-the-art online handwriting recognizers. Wide applicability is shown through results with two different VLM families on multiple public datasets. Our approach can be applied to off-the-shelf VLMs, does not require any changes in their architecture, and can be used in both fine-tuning and parameter-efficient tuning. We perform a detailed ablation study to identify the key elements of the proposed representation.
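
To make the representation concrete, a minimal sketch of serializing digital ink into a text token sequence follows; the token format is a hypothetical illustration, not the paper's actual encoding, which pairs such text with a rendered image of the ink:

    # Sketch: turn strokes (lists of normalized (x, y) points) into a
    # time-ordered token string a VLM can consume as text.
    def ink_to_tokens(strokes, grid=224):
        tokens = []
        for stroke in strokes:
            tokens.append("<stroke>")
            for x, y in stroke:
                # Quantize to an integer grid to keep the vocabulary small.
                tokens.append(f"{int(x * (grid - 1))},{int(y * (grid - 1))}")
            tokens.append("</stroke>")
        return " ".join(tokens)

    print(ink_to_tokens([[(0.1, 0.2), (0.15, 0.25)], [(0.5, 0.5)]]))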

--------------------------------------------------------------------------------------------------------

Optimal Transport for Structure Learning Under Missing Data

The paper develops a method using optimal transport to learn causal graphs from data even with missing values. Causal discovery amid missing data could benefit analyses in healthcare, social sciences, and other important domains.

Authors:  Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung

Link:  https://arxiv.org/abs/2402.15255v1

Date: 2024-02-23

Summary:

Causal discovery in the presence of missing data introduces a chicken-and-egg dilemma. While the goal is to recover the true causal structure, robust imputation requires considering the dependencies or, preferably, causal relations among variables. Merely filling in missing values with existing imputation methods and subsequently applying structure learning on the complete data is empirically shown to be suboptimal. To this end, we propose in this paper a score-based algorithm, based on optimal transport, for learning causal structure from missing data. This optimal transport viewpoint diverges from existing score-based approaches, which are dominantly based on EM. We cast structure learning as a density-fitting problem, where the goal is to find the causal model that induces a distribution of minimum Wasserstein distance to the distribution over the observed data. Through extensive simulations and real-data experiments, our framework is shown to recover the true causal graphs more effectively than the baselines. Empirical evidence also demonstrates the superior scalability of our approach, along with the flexibility to incorporate any off-the-shelf causal discovery method for complete data.
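
Schematically, the density-fitting objective reads (a sketch of the idea; the paper's exact formulation also handles the partially observed rows and the model parametrization):

    (\hat{G}, \hat{\theta}) = \arg\min_{G,\theta} \; W\big(P_{G,\theta},\, \hat{P}_{\mathrm{obs}}\big)

where P_{G,\theta} is the distribution induced by a candidate causal model, \hat{P}_{\mathrm{obs}} is the empirical distribution of the observed data, and W is the Wasserstein distance.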

--------------------------------------------------------------------------------------------------------

Artificial Bee Colony optimization of Deep Convolutional Neural Networks in the context of Biomedical Imaging

The paper optimizes deep convolutional neural networks using evolutionary techniques for enhanced performance in biomedical imaging tasks like MRI analysis. Advancing biomedical imaging with AI can transform disease screening and treatment planning.

Authors:  Adri Gomez Martin, Carlos Fernandez del Cerro, Monica Abella Garcia, Manuel Desco Menendez

Link:  https://arxiv.org/abs/2402.15246v1

Date: 2024-02-23

Summary:

Most efforts in Computer Vision focus on natural images or artwork, which differ significantly in both size and content from the kind of data biomedical image processing deals with. Thus, Transfer Learning models often prove suboptimal for these tasks, even after manual fine-tuning. The development of architectures from scratch is often infeasible due to the vastness of the hyperparameter space and a shortage of time, computational resources, and Deep Learning experts in most biomedical research laboratories. An alternative to manually defining the models is Neuroevolution, which employs metaheuristic techniques to optimize Deep Learning architectures. However, many algorithms proposed in the neuroevolution literature are either too unreliable or limited to a small, predefined region of the hyperparameter space. To overcome these shortcomings, we propose the Chimera Algorithm, a novel hybrid neuroevolutionary algorithm that integrates the Artificial Bee Colony Algorithm with Evolutionary Computation tools to generate models from scratch, as well as to refine a given architecture to better fit the task at hand. The Chimera Algorithm has been validated with two datasets of natural and medical images, producing models that surpassed the performance of those coming from Transfer Learning.
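
As a minimal sketch of the Artificial Bee Colony component (the search space, fitness stand-in, and loop below are illustrative assumptions; the Chimera Algorithm adds evolutionary operators and scores candidates by actually training them):

    import random

    SPACE = {"layers": range(2, 9), "filters": [16, 32, 64, 128], "lr_exp": range(-5, -1)}

    def sample():
        return {k: random.choice(list(v)) for k, v in SPACE.items()}

    def fitness(cfg):
        # Stand-in for validation accuracy after a short training run.
        return -abs(cfg["layers"] - 5) - abs(cfg["lr_exp"] + 3)

    food = [sample() for _ in range(10)]           # employed bees' food sources
    for _ in range(50):
        for i, cfg in enumerate(food):
            neighbor = dict(cfg)
            k = random.choice(list(SPACE))         # perturb one dimension
            neighbor[k] = random.choice(list(SPACE[k]))
            if fitness(neighbor) > fitness(cfg):   # greedy onlooker step
                food[i] = neighbor
        food.sort(key=fitness, reverse=True)
        food[-1] = sample()                        # scout bee abandons the worst source

    print(max(food, key=fitness))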

--------------------------------------------------------------------------------------------------------

GraphEdit: Large Language Models for Graph Structure Learning

The paper proposes using large language models to learn graph structures from data, overcoming issues with relying on explicit supervision. This could advance network analysis across domains like social networks, biology, and more.

Authors:  Zirui Guo, Lianghao Xia, Yanhua Yu, Yuling Wang, Zixuan Yang, Wei Wei, Liang Pang, Tat-Seng Chua, Chao Huang

Link:  https://arxiv.org/abs/2402.15183v1

Date: 2024-02-23

Summary:

Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: https://github.com/HKUDS/GraphEdit.
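
A minimal sketch of the underlying idea, asking an instruction-tuned LLM to judge candidate edges, might look as follows; `query_llm` is a placeholder for any chat-model call, and the prompt is an assumption rather than GraphEdit's actual instruction template (the linked repository has the real one):

    # Sketch: LLM-as-edge-judge for graph structure refinement.
    def judge_edge(query_llm, node_a_text, node_b_text):
        prompt = (
            "Given two nodes from a graph, decide whether an edge between "
            "them is meaningful. Answer YES or NO.\n"
            f"Node A: {node_a_text}\nNode B: {node_b_text}\nAnswer:"
        )
        return query_llm(prompt).strip().upper().startswith("YES")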

--------------------------------------------------------------------------------------------------------

KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models

The paper provides a new interactive evaluation framework for large language models to address inflated performance assessments from contaminated data. Better evaluation enables progress in language model training for real-world use.

Authors:  Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang

Link:  https://arxiv.org/abs/2402.15043v1

Date: 2024-02-23

Summary:

Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status instead of accurately gauging model performance. In this paper, we introduce KIEval, a Knowledge-grounded Interactive Evaluation framework, which incorporates an LLM-powered "interactor" role for the first time to accomplish dynamic, contamination-resilient evaluation. Starting with a question from a conventional LLM benchmark involving domain-specific knowledge, KIEval uses dynamically generated, multi-round, knowledge-focused dialogues to determine whether a model's response is merely a recall of benchmark answers or demonstrates the deep comprehension needed to apply knowledge in more complex conversations. Extensive experiments on seven leading LLMs across five datasets validate KIEval's effectiveness and generalization. We also reveal that data contamination contributes nothing to, and can even harm, models' real-world applicability and understanding, and that existing contamination detection methods for LLMs can only identify contamination in pre-training, not during supervised fine-tuning.
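
The interaction pattern can be sketched as a simple loop (all three callables are placeholders; KIEval's actual prompts, stopping rules, and scoring are richer):

    # Sketch: an "interactor" LLM generates knowledge-focused follow-ups
    # to a benchmark question while a judge scores each answer.
    def kieval_dialogue(candidate, interactor, judge, seed_question, rounds=3):
        history, scores = [("user", seed_question)], []
        for _ in range(rounds):
            answer = candidate(history)
            scores.append(judge(history, answer))
            follow_up = interactor(history + [("assistant", answer)])
            history += [("assistant", answer), ("user", follow_up)]
        return sum(scores) / len(scores)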

--------------------------------------------------------------------------------------------------------

Ar-Spider: Text-to-SQL in Arabic

The paper introduces the first Arabic text-to-SQL dataset for translating text queries to SQL. Advancing multilingual semantic parsing facilitates more natural database access and augments human productivity.

Authors:  Saleh Almohaimeed, Saad Almohaimeed, Mansour Al Ghanim, Liqiang Wang

Link:  https://arxiv.org/abs/2402.15012v1

Date: 2024-02-22

Summary:

In Natural Language Processing (NLP), one of the most important tasks is text-to-SQL semantic parsing, which focuses on enabling users to interact with databases in a more natural manner. In recent years, text-to-SQL has made significant progress, but most work has been English-centric. In this paper, we introduce Ar-Spider, the first Arabic cross-domain text-to-SQL dataset. Due to the unique nature of the language, two major challenges have been encountered, namely schema linguistic challenges and SQL structural challenges. In order to handle these issues and conduct the experiments, we adopt two baseline models, LGESQL [4] and S2SQL [12], both of which are tested with two cross-lingual models to alleviate the effects of schema linguistic and SQL structure linking challenges. The baselines demonstrate decent single-language performance on our Arabic text-to-SQL dataset, Ar-Spider, achieving 62.48% for S2SQL and 65.57% for LGESQL, only 8.79% below the highest results achieved by the baselines when trained on the English dataset. To achieve better performance on Arabic text-to-SQL, we propose the context similarity relationship (CSR) approach, which yields an overall performance increase of about 1.52% for S2SQL and 1.06% for LGESQL and narrows the gap between Arabic and English to 7.73%.

--------------------------------------------------------------------------------------------------------

Demographic Bias of Expert-Level Vision-Language Foundation Models in Medical Imaging

The paper reveals biases in medical imaging AI models that could exacerbate disparities in healthcare provision across demographics. Understanding and addressing these biases is crucial for equitable, safe AI deployment in medicine.

Authors:  Yuzhe Yang, Yujia Liu, Xin Liu, Avanti Gulhane, Domenico Mastrodicasa, Wei Wu, Edward J Wang, Dushyant W Sahani, Shwetak Patel

Link:  https://arxiv.org/abs/2402.14815v1

Date: 2024-02-22

Summary:

Advances in artificial intelligence (AI) have achieved expert-level performance in medical imaging applications. Notably, self-supervised vision-language foundation models can detect a broad spectrum of pathologies without relying on explicit training annotations. However, it is crucial to ensure that these AI models do not mirror or amplify human biases, thereby disadvantaging historically marginalized groups such as females or Black patients. The manifestation of such biases could systematically delay essential medical care for certain patient subgroups. In this study, we investigate the algorithmic fairness of state-of-the-art vision-language foundation models in chest X-ray diagnosis across five globally-sourced datasets. Our findings reveal that, compared to board-certified radiologists, these foundation models consistently underdiagnose marginalized groups, with even higher rates in intersectional subgroups such as Black female patients. Such demographic biases are present across a wide range of pathologies and demographic attributes. Further analysis of the model embeddings uncovers significant encoding of demographic information. Deploying AI systems with these biases in medical imaging can intensify pre-existing care disparities, posing potential challenges to equitable healthcare access and raising ethical questions about their clinical application.
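
The core fairness measurement can be sketched in a few lines: the underdiagnosis rate is the fraction of patients with a finding whom the model marks negative, computed per subgroup (the column names below are illustrative assumptions):

    import pandas as pd

    df = pd.DataFrame({
        "label":      [1, 1, 1, 1, 1, 1],      # radiologist ground truth
        "prediction": [0, 1, 0, 0, 1, 1],      # model output
        "sex":  ["F", "F", "F", "M", "M", "M"],
        "race": ["Black", "Black", "White", "White", "Black", "White"],
    })

    positives = df[df["label"] == 1]
    underdx = positives.groupby(["sex", "race"]).apply(
        lambda g: (g["prediction"] == 0).mean()
    )
    print(underdx)  # higher values mean more missed findings in that subgroup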

--------------------------------------------------------------------------------------------------------

CriticBench: Benchmarking LLMs for Critique-Correct Reasoning

The paper provides a benchmark to assess large language models' abilities to critique and correct their own reasoning across mathematical, commonsense, and other tasks. This capability would vastly expand the reliability and scope of language model applications.

Authors:  Zicheng Lin, Zhibin Gou, Tian Liang, Ruilin Luo, Haowei Liu, Yujiu Yang

Link:  https://arxiv.org/abs/2402.14809v1

Date: 2024-02-22

Summary:

The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CriticBench, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CriticBench encompasses five reasoning domains: mathematical, commonsense, symbolic, coding, and algorithmic. It compiles 15 datasets and incorporates responses from three LLM families. Utilizing CriticBench, we evaluate and dissect the performance of 17 LLMs in generation, critique, and correction reasoning, i.e., GQC reasoning. Our findings reveal: (1) a linear relationship in GQC capabilities, with critique-focused training markedly enhancing performance; (2) a task-dependent variation in correction effectiveness, with logic-oriented tasks being more amenable to correction; (3) GQC knowledge inconsistencies that decrease as model size increases; and (4) an intriguing inter-model critiquing dynamic, where stronger models are better at critiquing weaker ones, while weaker models can surprisingly surpass stronger ones in their self-critique. We hope these insights into the nuanced critique-correct reasoning of LLMs will foster further research in LLM critique and self-improvement.

--------------------------------------------------------------------------------------------------------

Zero-shot cross-lingual transfer in instruction tuning of large language model

The paper studies cross-lingual transfer in instruction tuning of large language models, which could enable much broader accessibility and usability of models across languages.

Authors:  Nadezhda Chirkova, Vassilina Nikoulina

Link:  https://arxiv.org/abs/2402.14778v1

Date: 2024-02-22

Summary:

Instruction tuning (IT) is widely used to teach pretrained large language models (LLMs) to follow arbitrary instructions, but is under-studied in multilingual settings. In this work, we conduct a systematic study of zero-shot cross-lingual transfer in IT, when an LLM is instruction-tuned on English-only data and then tested on user prompts in other languages. We investigate the influence of model configuration choices and devise a multi-facet evaluation strategy for multilingual instruction following. We find that cross-lingual transfer does happen successfully in IT even if all stages of model training are English-centric, but only if multilinguality is taken into account in hyperparameter tuning and with large enough IT data. English-trained LLMs are capable of generating correct-language, comprehensive and helpful responses in the other languages, but suffer from low factuality and may occasionally have fluency errors.

--------------------------------------------------------------------------------------------------------

Generalising realisability in statistical learning theory under epistemic uncertainty

The paper examines how concepts like realisability from statistical learning theory generalize under uncertainty between distributions. This could lead to more robust learning algorithms.

Authors:  Fabio Cuzzolin

Link:  https://arxiv.org/abs/2402.14759v1

Date: 2024-02-22

Summary:

The purpose of this paper is to examine how central notions in statistical learning theory, such as realisability, generalise under the assumption that the training and test distributions are issued from the same credal set, i.e., a convex set of probability distributions. This can be considered a first step towards a more general treatment of statistical learning under epistemic uncertainty.
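
One natural way to state the generalized notion (a sketch consistent with the abstract, not necessarily the paper's exact definition): where classical realisability asks for a hypothesis h in the class \mathcal{H} with zero risk under the single data distribution, the credal version asks for

    \exists\, h \in \mathcal{H} : \; \sup_{P \in \mathcal{C}} R_P(h) = 0

so that h is consistent with every distribution P in the credal set \mathcal{C}, where R_P(h) denotes the risk of h under P.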

--------------------------------------------------------------------------------------------------------

A Usage-centric Take on Intent Understanding in E-Commerce

The paper proposes a new perspective on intent understanding in e-commerce centered on product usage, which could improve recommendation diversity and relevance.

Authors:  Wendi Zhou, Tianyi Li, Pavlos Vougiouklis, Mark Steedman, Jeff Z. Pan

Link:  https://arxiv.org/abs/2402.14901v1

Date: 2024-02-22

Summary:

Identifying and understanding user intents is a pivotal task for E-Commerce. Despite its popularity, intent understanding has not been consistently defined or accurately benchmarked. In this paper, we focus on predicative user intents as "how a customer uses a product", and pose intent understanding as a natural language reasoning task, independent of product ontologies. We identify two weaknesses of FolkScope, the SOTA E-Commerce Intent Knowledge Graph, that limit its capacity to reason about user intents and to recommend diverse useful products. Following these observations, we introduce a Product Recovery Benchmark including a novel evaluation framework and an example dataset. We further validate the above FolkScope weaknesses on this benchmark.

--------------------------------------------------------------------------------------------------------

Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning

The paper combines logical rules and neural networks for concept-based learning. Incorporating more structured knowledge into models could enhance reasoning while preserving interpretability.

Authors:  Andrei V. Konstantinov, Lev V. Utkin

Link:  https://arxiv.org/abs/2402.14726v1

Date: 2024-02-22

Summary:

The paper formulates the problem of incorporating expert rules into machine learning models to extend concept-based learning. It proposes how to combine logical rules with neural networks that predict concept probabilities. The first idea behind the combination is to form constraints on the joint probability distribution over all combinations of concept values so that the expert rules are satisfied. The second idea is to represent the feasible set of probability distributions as a convex polytope and to use its vertices or faces. We provide several approaches for solving the stated problem and for training neural networks that guarantee the output concept probabilities do not violate the expert rules. The solution can be viewed as a way of combining inductive and deductive learning. Expert rules are used in a broad sense: any logical function that connects concepts and class labels, or concepts with each other, can be regarded as a rule. This significantly broadens the scope of the proposed results. Numerical examples illustrate the approaches. The code of the proposed algorithms is publicly available.
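
To sketch the constraint idea in symbols (an illustration consistent with the abstract, not necessarily the paper's notation): a rule such as c_1 \wedge c_2 \Rightarrow y forbids probability mass on any assignment of concept/label values v where both concepts hold but the label does not,

    \sum_{v \,:\, v_{c_1}=1,\ v_{c_2}=1,\ v_y=0} p(v) = 0, \qquad p(v) \ge 0, \quad \sum_v p(v) = 1.

The feasible distributions then form a convex polytope, and any admissible p can be written as a convex combination p = \sum_i \lambda_i u_i of its vertices u_i, which is the representation a network can be trained to output.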

--------------------------------------------------------------------------------------------------------

IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus

The paper constructs a large-scale information extraction dataset to enhance language model capabilities, which could significantly advance information search, knowledge base construction, and other applications.

Authors:  Honghao Gui, Hongbin Ye, Lin Yuan, Ningyu Zhang, Mengshu Sun, Lei Liang, Huajun Chen

Link:  https://arxiv.org/abs/2402.14710v1

Date: 2024-02-22

Summary:

Large Language Models (LLMs) demonstrate remarkable potential across various domains; however, they exhibit a significant performance gap in Information Extraction (IE). High-quality instruction data is vital for enhancing the specific capabilities of LLMs, while current IE datasets tend to be small in scale, fragmented, and lacking a standardized schema. To this end, we introduce IEPile, a comprehensive bilingual (English and Chinese) IE instruction corpus containing approximately 0.32B tokens. We construct IEPile by collecting and cleaning 33 existing IE datasets, and introduce schema-based instruction generation to unearth a large-scale corpus. Experimental results on LLaMA and Baichuan demonstrate that IEPile can enhance the performance of LLMs for IE, especially in zero-shot generalization. We open-source the resource and pre-trained models, hoping to provide valuable support to the NLP community.
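
For a sense of what "schema-based instruction" means in practice, an instance might be shaped like the following (the field names are assumptions for illustration; the released IEPile corpus defines the real format):

    # Hypothetical shape of one schema-based IE instruction instance.
    example = {
        "instruction": "Extract all entities of the given types from the text.",
        "schema": ["person", "organization", "location"],
        "input": "Tim Cook announced the results at Apple's Cupertino campus.",
        "output": {
            "person": ["Tim Cook"],
            "organization": ["Apple"],
            "location": ["Cupertino"],
        },
    }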

--------------------------------------------------------------------------------------------------------

CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks

The paper introduces a novel graph network architecture for enhanced fraud detection in financial transactions. Better fraud prevention models are crucial for consumer protection and maintaining trust.

Authors:  Yifan Duan, Guibin Zhang, Shilong Wang, Xiaojiang Peng, Wang Ziqi, Junyuan Mao, Hao Wu, Xinke Jiang, Kun Wang

Link:  https://arxiv.org/abs/2402.14708v1

Date: 2024-02-22

Summary:

Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node's local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model's robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector uses attention weights in the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. The Causal-Intervener then performs a causal mixup enhancement on the environment nodes based on the identified node sets. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions.
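
A minimal sketch of the mixup step on environment nodes (the mixing rule is an illustrative assumption; CaT-GNN derives the causal/environment split from temporal attention weights):

    import torch

    def causal_mixup(x, env_idx, alpha=0.5):
        """x: (N, d) node features; env_idx: LongTensor of environment nodes.
        Blend each environment node with another environment node, leaving
        the identified causal nodes untouched."""
        mixed = x.clone()
        perm = env_idx[torch.randperm(len(env_idx))]
        lam = torch.distributions.Beta(alpha, alpha).sample()
        mixed[env_idx] = lam * x[env_idx] + (1 - lam) * x[perm]
        return mixed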

--------------------------------------------------------------------------------------------------------

RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation

The paper presents a platform for generating robot manipulation code from natural language and assessing model reasoning about physical interactions. Advancing code generation and physical reasoning could enable more seamless human-robot collaboration on complex real-world tasks.

Authors:  Junting Chen, Yao Mu, Qiaojun Yu, Tianming Wei, Silang Wu, Zhecheng Yuan, Zhixuan Liang, Chao Yang, Kaipeng Zhang, Wenqi Shao, Yu Qiao, Huazhe Xu, Mingyu Ding, Ping Luo

Link:  https://arxiv.org/abs/2402.14623v1

Date: 2024-02-22

Summary:

Rapid progress in high-level task planning and code generation for open-world robot manipulation has been witnessed in Embodied AI. However, previous studies put much effort into the general common-sense reasoning and task planning capabilities of large-scale language or multi-modal models, and relatively little into ensuring the deployability of generated code on real robots or into other fundamental components of autonomous robot systems, including robot perception, motion planning, and control. To bridge this "ideal-to-real" gap, this paper presents RoboScript, a platform for 1) a deployable robot manipulation pipeline powered by code generation; and 2) a code generation benchmark for robot manipulation tasks in free-form natural language. The RoboScript platform addresses this gap by emphasizing a unified interface with both simulation and real robots, based on abstraction from the Robot Operating System (ROS), ensuring syntax compliance and simulation validation with Gazebo. We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms, and multiple grippers. Additionally, our benchmark assesses reasoning abilities for physical space and constraints, highlighting the differences between GPT-3.5, GPT-4, and Gemini in handling complex physical interactions. Finally, we present a thorough evaluation of the whole system, exploring how each module in the pipeline (code generation, perception, and motion planning) and even object geometric properties impact the overall performance of the system.
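
For flavor, the kind of program such a pipeline aims to generate for an instruction like "put the red block in the bowl" might read as follows; the `robot` and `perception` interfaces here are hypothetical stand-ins for the platform's ROS-based abstraction layer, not its actual API:

    # Hypothetical generated manipulation script.
    def put_red_block_in_bowl(robot, perception):
        block = perception.detect("red block")    # 3D pose from perception
        bowl = perception.detect("bowl")
        robot.move_to(block.pose, offset_z=0.10)  # pre-grasp approach
        robot.move_to(block.pose)
        robot.grasp()
        robot.move_to(bowl.pose, offset_z=0.15)   # carry above the bowl
        robot.release()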

--------------------------------------------------------------------------------------------------------

CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for Optimized Learning Fusion

The paper proposes a novel image model training approach integrating cross-entropy and contrastive learning to enhance accuracy and efficiency. Refining loss functions and model optimization has potential to broadly advance computer vision across application domains.

Authors:  Zijun Long, George Killick, Lipeng Zhuang, Gerardo Aragon-Camarasa, Zaiqiao Meng, Richard Mccreadie

Link:  https://arxiv.org/abs/2402.14551v1

Date: 2024-02-22

Summary:

State-of-the-art pre-trained image models predominantly adopt a two-stage approach: initial unsupervised pre-training on large-scale datasets followed by task-specific fine-tuning using Cross-Entropy (CE) loss. However, it has been demonstrated that CE can compromise model generalization and stability. While recent works employing contrastive learning address some of these limitations by enhancing the quality of embeddings and producing better decision boundaries, they often overlook the importance of hard negative mining and rely on resource-intensive and slow training using large sample batches. To counter these issues, we introduce a novel approach named CLCE, which integrates Label-Aware Contrastive Learning with CE. Our approach not only maintains the strengths of both loss functions but also leverages hard negative mining in a synergistic way to enhance performance. Experimental results demonstrate that CLCE significantly outperforms CE in Top-1 accuracy across twelve benchmarks, achieving gains of up to 3.52% in few-shot learning scenarios and 3.41% in transfer learning settings with the BEiT-3 model. Importantly, CLCE effectively mitigates the dependency of contrastive learning on large batch sizes, such as 4096 samples per batch, a limitation that has previously constrained the application of contrastive learning in budget-limited hardware environments.
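
A minimal sketch of blending the two losses (the weighting and the simple supervised-contrastive term below are illustrative; CLCE additionally builds in hard negative mining):

    import torch
    import torch.nn.functional as F

    def clce_loss(embeddings, logits, labels, lam=0.5, tau=0.1):
        ce = F.cross_entropy(logits, labels)
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.T / tau
        self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(self_mask, -1e9)    # exclude self-pairs
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # Mean log-probability of same-label (positive) pairs per anchor.
        con = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
        return lam * ce + (1 - lam) * con.mean()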

--------------------------------------------------------------------------------------------------------

A Language Model's Guide Through Latent Space

The paper explores using concept vectors to control attributes like humor and creativity in language model outputs. Mastering nuanced attributes could make conversational agents much more useful, trustworthy, and engaging.

Authors:  Dimitri von Rütte, Sotiris Anagnostidis, Gregor Bachmann, Thomas Hofmann

Link:  https://arxiv.org/abs/2402.14433v1

Date: 2024-02-22

Summary:

Concept guidance has emerged as a cheap and simple way to control the behavior of language models by probing their hidden representations for concept vectors and using them to perturb activations at inference time. While the focus of previous work has largely been on truthfulness, in this paper we extend this framework to a richer set of concepts such as appropriateness, humor, creativity and quality, and explore to what degree current detection and guidance strategies work in these challenging settings. To facilitate evaluation, we develop a novel metric for concept guidance that takes into account both the success of concept elicitation as well as the potential degradation in fluency of the guided model. Our extensive experiments reveal that while some concepts such as truthfulness more easily allow for guidance with current techniques, novel concepts such as appropriateness or humor either remain difficult to elicit, need extensive tuning to work, or even experience confusion. Moreover, we find that probes with optimal detection accuracies do not necessarily make for the optimal guides, contradicting previous observations for truthfulness. Our work warrants a deeper investigation into the interplay between detectability, guidability, and the nature of the concept, and we hope that our rich experimental test-bed for guidance research inspires stronger follow-up approaches.
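
Mechanically, the guidance step is a small inference-time perturbation; here is a sketch assuming a standard PyTorch transformer (the layer naming and scale are assumptions, and the concept vector itself comes from probing, as in the paper):

    import torch

    def add_concept_hook(layer, concept_vec, alpha=8.0):
        """Register a forward hook that steers a layer's hidden states
        toward a probed concept direction at inference time."""
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + alpha * concept_vec
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
        return layer.register_forward_hook(hook)

    # handle = add_concept_hook(model.layers[12], humor_vec)
    # ... generate text, then handle.remove() to restore default behavior.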

--------------------------------------------------------------------------------------------------------


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.