3D Gaussian Splatting (3DGS) has recently emerged as a significant advancement in 3D scene reconstruction, offering high-resolution, real-time rendering with photorealistic quality and the ability to learn from standard camera captures. Despite its merits, 3DGS often sacrifices geometric accuracy in favor of visual fidelity. In contrast, geometrically precise representations such as voxel grids, point clouds, and meshes are widely used in robotics, especially with the increasing availability of high-accuracy LiDAR and real-time LiDAR-Inertial-Visual (LIV) systems. Although recent research has utilized LiDAR priors to initialize 3D Gaussians, the optimization processes in 3DGS can distort the original geometric information. Furthermore, most existing methods focus on offline 3DGS training, limiting the real-time capabilities of LIV systems. To overcome these limitations, we introduce MEGA, an edge-assisted approach with mesh-aligned 3DGS. MEGA enables online 3DGS training by leveraging incrementally available posed frames, colored LiDAR points, and triangle mesh faces from LIV systems. It employs a novel mesh-aligned representation to dynamically populate 3D Gaussians based on the geometric properties of triangle mesh faces. Additionally, it introduces an image-to-geometry alignment technique to resolve inconsistencies between posed frames and LiDAR priors. Our comprehensive evaluation demonstrates that MEGA achieves superior rendering quality while preserving high-fidelity geometric information.
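To make the mesh-aligned idea concrete, the sketch below shows one common way to derive a 3D Gaussian from a triangle face: mean at the face centroid and a covariance spanned by the two edge vectors, flattened along the face normal. This is an illustrative assumption, not MEGA's actual parameterization; the function name `gaussians_from_faces` and the covariance scaling are hypothetical.

```python
import numpy as np

def gaussians_from_faces(vertices, faces, eps=1e-4):
    """Illustrative sketch: place one anisotropic 3D Gaussian per
    triangle face, with mean at the centroid and covariance that
    spreads within the face plane and stays thin along the normal."""
    means, covs = [], []
    for f in faces:
        v0, v1, v2 = vertices[f]
        mean = (v0 + v1 + v2) / 3.0
        e1, e2 = v1 - v0, v2 - v0
        n = np.cross(e1, e2)
        n /= np.linalg.norm(n) + 1e-12
        # In-plane spread from the edge vectors; tiny variance along
        # the normal keeps the Gaussian aligned with the surface.
        cov = 0.25 * (np.outer(e1, e1) + np.outer(e2, e2)) + eps * np.outer(n, n)
        means.append(mean)
        covs.append(cov)
    return np.array(means), np.array(covs)

# Example: a single unit right triangle in the z = 0 plane.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
F = np.array([[0, 1, 2]])
mu, sigma = gaussians_from_faces(V, F)
```

For the example triangle, the resulting Gaussian sits at the centroid (1/3, 1/3, 0) with a covariance that is diagonal and nearly flat along z, matching the face plane.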