1 point by thw20 5 hours ago | 2 comments
  • thw20 5 hours ago
    This project reveals an interesting phenomenon: LLMs convert semantically non-informative tokens into attention sinks through middle-layer MLPs.

    The converted sinks are termed secondary attention sinks because they are weaker than the BOS attention sink.

    This might be related to layer specialisation in LLMs!
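    For anyone unfamiliar with the attention-sink effect the comment refers to: it's the observation that certain tokens (classically BOS) absorb a disproportionate share of attention mass. A minimal toy sketch of the mechanism, assuming a single attention head with random queries/keys and a hand-crafted "sink" key that every query partially aligns with (all names and magnitudes here are illustrative, not from the project):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 64, 16  # head dimension, sequence length

    # Random queries and keys for ordinary tokens.
    Q = rng.normal(size=(n, d))
    K = rng.normal(size=(n, d))

    # Model a sink token (e.g. BOS) as a key along a direction
    # that every query shares a component with.
    sink_dir = rng.normal(size=d)
    sink_dir /= np.linalg.norm(sink_dir)
    K[0] = 12.0 * sink_dir          # large-norm sink key
    Q += 3.0 * sink_dir             # queries align with the sink direction

    # Standard scaled dot-product attention weights.
    scores = Q @ K.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)

    # Fraction of attention mass landing on the sink token,
    # averaged over queries; uniform attention would give 1/n.
    sink_mass = attn[:, 0].mean()
    print(f"mean attention mass on token 0: {sink_mass:.2f}")
    ```

    With the alignment boost, the sink token captures far more than the uniform 1/n share; the secondary sinks the project describes would presumably be the same effect at a weaker magnitude.
    
    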
