• Slide 1
  • Slide 2
  • Slide 3: Main contributions: TALL
  • Slide 4: Our main contributions to Text Mining
  • Slide 5: Free and Easy-to-Use Tools timeline
  • Slide 6
  • Slide 7: A comprehensive workflow
  • Slide 8: Let’s play with TALL!
  • Slide 9: How to install TALL
  • Slide 10: Interface overview
  • Slide 11: TALL buttons
  • Slide 12
  • Slide 13
  • Slide 14: Import text from multiple file formats
  • Slide 15: Sample collections
  • Slide 16: Sample collections – raw data example
  • Slide 17: Load TALL structured files – Save your progress and continue later at any time
  • Slide 18: Import Wikipedia pages
  • Slide 19: Dataset visualization
  • Slide 20: Edit, divide, and add external information
  • Slide 21: Universal Dependencies for Linguistic Modeling
  • Slide 22: Automatic Lemmatization and PoS-Tagging via LLMs
  • Slide 23: Special entities tagging
  • Slide 24: Special entities tagging
  • Slide 25: Semantic Tagging – Automatic multi-word creation
  • Slide 26: Semantic Tagging – Multi-word creation by a list and Custom Term List
  • Slide 27
  • Slide 28
  • Slide 29: Filtering and grouping
  • Slide 30: PoS Tag Selection
  • Slide 31
  • Slide 32: Descriptive statistics – Main information
  • Slide 33
  • Slide 34
  • Slide 35
  • Slide 36
  • Slide 37: Most Used Words
  • Slide 38: Most Used Words
  • Slide 39: Multiple methods for Topic Detection
  • Slide 40: Correspondence analysis
  • Slide 41: Tandem analysis: LCA + Hierarchical cluster analysis
  • Slide 42: Correspondence analysis – Factorial plane
  • Slide 43: Correspondence analysis – Factorial plane
  • Slide 44: Correspondence analysis – Dendrogram
  • Slide 45: Correspondence analysis – Tables
  • Slide 46: Clustering – Hierarchical Clustering
  • Slide 47: Clustering – Hierarchical Clustering – Parameters
  • Slide 48: Correspondence analysis and Clustering features
  • Slide 49: From lexical to co-occurrence matrices
  • Slide 50: From lexical to co-occurrence matrices (2)
  • Slide 51: Network analysis
  • Slide 52: Community detection
  • Slide 53: Network – Co-word Analysis
  • Slide 54: Network – Co-word Analysis – Parameters
  • Slide 55: Network – Co-word Analysis
  • Slide 56
  • Slide 57: Topic Modeling
  • Slide 58: Latent Dirichlet Allocation (LDA)
  • Slide 59: Topic modeling – K choice
  • Slide 60
  • Slide 61
  • Slide 62
  • Slide 63
  • Slide 64
  • Slide 65: Polarity Detection
  • Slide 66: Polarized lexicons in TALL
  • Slide 67: Lexicon-based polarity score computation
  • Slide 68: Polarity detection
  • Slide 69: Polarity detection – Parameters
  • Slide 70: Polarity detection – Top Words
  • Slide 71: Polarity detection – Table
  • Slide 72: Summarization
  • Slide 73: Summarization
  • Slide 74: Summarization
  • Slide 75: Summarization – Full document and Table
  • Slide 76
  • Slide 77: References
  • Slide 78: References