Zhaoning Li | tʂɑu niŋ li | 李肇宁

PhD Student | Social Psychology | University of Macau

Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings


Journal article


Zhaoning Li, Qi Li, Xiaotian Zou, Jiangtao Ren
Neurocomputing, vol. 423, 2021, pp. 207-219


View PDF DOI arXiv Github
Cite

APA
Li, Z., Li, Q., Zou, X., & Ren, J. (2021). Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings. Neurocomputing, 423, 207–219. https://doi.org/10.1016/j.neucom.2020.08.078


Chicago/Turabian
Li, Zhaoning, Qi Li, Xiaotian Zou, and Jiangtao Ren. “Causality Extraction Based on Self-Attentive BiLSTM-CRF with Transferred Embeddings.” Neurocomputing 423 (2021): 207–219.


MLA
Li, Zhaoning, et al. “Causality Extraction Based on Self-Attentive BiLSTM-CRF with Transferred Embeddings.” Neurocomputing, vol. 423, 2021, pp. 207–19, doi:10.1016/j.neucom.2020.08.078.


BibTeX

@article{li2021a,
  title = {Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings},
  year = {2021},
  journal = {Neurocomputing},
  pages = {207-219},
  volume = {423},
  doi = {10.1016/j.neucom.2020.08.078},
  author = {Li, Zhaoning and Li, Qi and Zou, Xiaotian and Ren, Jiangtao}
}

Citations: 104; JCR Q2; 2022 JIF: 6.0; 2023 CAS Journal Ranking (upgraded edition): Computer Science, Tier 2 (Top)

Highlights

  • A novel causality tagging scheme is proposed to serve causality extraction
  • Transferred embeddings dramatically alleviate the problem of data insufficiency
  • The self-attention mechanism can capture long-range dependencies between causalities
  • Experimental results show that the proposed method outperforms other baselines

Abstract

Causality extraction from natural language texts is a challenging open problem in artificial intelligence. Existing methods utilize patterns, constraints, and machine learning techniques to extract causality; they depend heavily on domain knowledge and require considerable human effort and time for feature engineering. In this paper, we formulate causality extraction as a sequence labeling problem based on a novel causality tagging scheme. On this basis, we propose a neural causality extractor with the BiLSTM-CRF model as the backbone, named SCITE (Self-attentive BiLSTM-CRF wIth Transferred Embeddings), which can directly extract cause and effect without extracting candidate causal pairs and identifying their relations separately. To address the problem of data insufficiency, we transfer contextual string embeddings, also known as Flair embeddings, trained on a large corpus, into our task. In addition, to improve the performance of causality extraction, we introduce a multi-head self-attention mechanism into SCITE to learn the dependencies between causal words. We evaluate our method on a public dataset, and experimental results demonstrate that our method achieves significant and consistent improvement compared to baselines.
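The key idea of the tagging scheme is that once cause and effect words carry span tags, causality extraction reduces to decoding a single label sequence per sentence. A minimal sketch of such a decoder is below; the BIO-style tag names (`B-C`, `I-C`, `B-E`, `I-E`, `O`) are illustrative assumptions, not necessarily the exact labels used in the paper.

```python
# Illustrative decoder for a BIO-style causality tagging scheme.
# Tag names are assumed for the sketch: B-C/I-C mark cause spans,
# B-E/I-E mark effect spans, and O marks non-causal tokens.

def extract_causal_spans(tokens, tags):
    """Group BIO tags into cause and effect text spans."""
    spans = {"cause": [], "effect": []}
    current, label = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # close any span still open
                spans[label].append(" ".join(current))
            current = [tok]
            label = "cause" if tag == "B-C" else "effect"
        elif tag.startswith("I-") and current:
            current.append(tok)  # continue the open span
        else:  # an O tag ends any open span
            if current:
                spans[label].append(" ".join(current))
            current, label = [], None
    if current:  # flush a span that runs to sentence end
        spans[label].append(" ".join(current))
    return spans

tokens = ["The", "earthquake", "caused", "massive", "damage", "."]
tags   = ["B-C", "I-C", "O", "B-E", "I-E", "O"]
print(extract_causal_spans(tokens, tags))
# {'cause': ['The earthquake'], 'effect': ['massive damage']}
```

In the full model, the tag sequence fed to this kind of decoder would come from the CRF layer on top of the self-attentive BiLSTM rather than being given by hand.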

Keywords

Causality extraction, Sequence labeling, BiLSTM-CRF, Flair embeddings, Self-attention
Figure: The main structure of SCITE for causality sequence labeling. The left side of the figure shows a character CNN structure representing the word “financial”.