Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation

Multi-modal large language models (MLLMs) incorporate heterogeneous modalities into LLMs, enabling a comprehensive understanding of diverse scenarios and objects. Despite the proliferation of evaluation benchmarks and leaderboards for MLLMs, they predominantly overlook the critical capacity of MLLMs to comprehend world knowledge with structured abstractions that appear in visual form. To address this gap, we propose a novel evaluation paradigm and devise M3STR, an innovative benchmark grounded in the Multi-Modal Map for STRuctured understanding. This benchmark leverages multi-modal knowledge graphs to synthesize images encapsulating subgraph architectures enriched with multi-modal entities. M3STR necessitates that MLLMs not only recognize the multi-modal entities within the visual inputs but also decipher intricate relational topologies among them. We delineate the benchmark's statistical profiles and automated construction pipeline, accompanied by an extensive empirical analysis of 26 state-of-the-art MLLMs. Our findings reveal persistent deficiencies in processing abstractive visual information with structured knowledge, thereby charting a pivotal trajectory for advancing MLLMs' holistic reasoning capacities.

🎆 News

📝 Dependencies and Supported MLLMs

Supported MLLMs

  • LLaVA: LLaVA-1.5-7B, LLaVA-1.6-vicuna-7B, LLaVA-llama-3-8b-v1.1
  • InstructBLIP: InstructBLIP-7B, InstructBLIP-13B
  • Deepseek-VL: Deepseek-VL-1.3B-chat, Deepseek-VL-7B-chat, Deepseek-VL2-tiny, Deepseek-VL2-small
  • InternVL2.5: InternVL2.5-1B, InternVL2.5-8B
  • Phi-vision: Phi3-vision, Phi3.5-vision
  • MiniCPM-V: MiniCPM-V-2, MiniCPM-V-2.5, MiniCPM-V-2.6
  • Qwen-VL: Qwen2-VL-2B, Qwen2-VL-7B, Qwen2-VL-72B, Qwen2.5-VL-3B, Qwen2.5-VL-7B, Qwen2.5-VL-72B
  • Others: Chameleon-7B

Before running the evaluation experiments, download the MLLM weights from the Hugging Face Model Hub or ModelScope, and set model_path_map in utils.py accordingly.
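For instance, a checkpoint can be fetched locally with the Hugging Face CLI; the repository id and target directory below are only illustrative, and the resulting local path is what you would record in model_path_map:

# Illustrative download of one supported checkpoint; adjust the repo id and path as needed
huggingface-cli download Qwen/Qwen2-VL-7B-Instruct --local-dir ./checkpoints/Qwen2-VL-7B-Instruct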

Note that our code supports several different MLLMs. However, since different MLLMs require different Python environments, you will need to configure multiple environments to evaluate the different models (a setup sketch is shown after the list below).

  • For most of the MLLMs, the base Python environment specified in envs/requirements_base.txt is sufficient.
  • For Qwen-VL models, set up a separate Python environment with the config in envs/requirements_qwen.txt.
  • For Deepseek-VL and Deepseek-VL2, use envs/requirements_deepseek1.txt and envs/requirements_deepseek2.txt respectively. Note that you should also follow the instructions in the official repositories to install the extra libraries required for the Deepseek-VL models; we have prepared them in DeepSeek-VL/ and DeepSeek-VL2/.
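A minimal setup sketch using venv (environment names are arbitrary; conda works equally well):

# Base environment used by most MLLMs
python -m venv .venv_base && .venv_base/bin/pip install -r envs/requirements_base.txt
# Dedicated environment for Qwen-VL models
python -m venv .venv_qwen && .venv_qwen/bin/pip install -r envs/requirements_qwen.txt
# Dedicated environments for Deepseek-VL and Deepseek-VL2
python -m venv .venv_deepseek && .venv_deepseek/bin/pip install -r envs/requirements_deepseek1.txt
python -m venv .venv_deepseek2 && .venv_deepseek2/bin/pip install -r envs/requirements_deepseek2.txt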

🌲 Data Preparation

We have prepared the images in this repo.

🐊 Evaluation

You can run the evaluation code with the following script:

python run_task1_evaluation_vllm.py \
--model_type internvl \
--model_used all
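Here --model_type presumably selects the model family and --model_used selects which configured checkpoints of that family to evaluate; the accepted values should match the entries you set in model_path_map in utils.py. A hypothetical invocation for the Qwen-VL family (check utils.py for the exact identifiers) would be:

# Hypothetical: evaluate all configured Qwen-VL checkpoints
python run_task1_evaluation_vllm.py \
--model_type qwen \
--model_used all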

🀝 Citation

@misc{zhang2025abstractivevisualunderstandingmultimodal,
      title={Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation}, 
      author={Yichi Zhang and Zhuo Chen and Lingbing Guo and Yajing Xu and Min Zhang and Wen Zhang and Huajun Chen},
      year={2025},
      eprint={2506.01293},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01293}, 
}