Peer-Reviewed

3D Firework Reconstruction from a Given Videos

Received: 16 September 2018     Published: 18 September 2018
Abstract

Reconstruction of a three-dimensional (3D) firework show from a given video is a key technology for light-source simulation in computer graphics, and it can be more efficient and realistic than traditional methods. Although firework models are already mature, to the best of our knowledge no existing method reconstructs a firework show from a given video, and the lack of camera parameters and depth information makes the reconstruction challenging. In this paper, a method is proposed to solve this problem. A rendering model is constructed that takes as input parameters describing the color and position of the firework and generates a 3D firework show as output; the problem then becomes extracting the parameters required by the rendering model from the given video. The parameters are divided into two groups according to their relevance, and different neural networks, including a 3D Convolutional Neural Network (3D-CNN) and a Recurrent Neural Network (RNN), are designed to extract each group from the video. Tests on firework videos shot from various perspectives show that reconstructing a 3D firework from a given video in this way is practicable and effective.
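The rendering model the abstract describes builds on classic particle-system fireworks ([1], [2], [4]). As an illustrative sketch only, not the authors' actual model, a single burst can be simulated by sampling particle velocities uniformly on a sphere around the burst position and integrating under gravity and air drag; all function names and constants below are assumptions for illustration:

```python
import numpy as np

def launch_firework(n_particles=200, burst_pos=(0.0, 0.0, 50.0),
                    speed=10.0, seed=0):
    """Sample initial particle velocities uniformly on a sphere around
    the burst position (the classic particle-system firework burst)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_particles, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit directions
    positions = np.broadcast_to(np.asarray(burst_pos, dtype=float),
                                (n_particles, 3)).copy()
    return positions, v * speed

def step(positions, velocities, dt=0.05, gravity=9.8, drag=0.1):
    """Advance one explicit Euler step with gravity and linear drag."""
    velocities = velocities * (1.0 - drag * dt)     # air resistance
    velocities[:, 2] -= gravity * dt                # gravity along -z
    positions = positions + velocities * dt
    return positions, velocities

# Simulate a short burst (2 seconds of simulated time).
pos, vel = launch_firework()
for _ in range(40):
    pos, vel = step(pos, vel)
```

The reconstruction task described in the abstract would then amount to regressing the inputs of such a model (burst position, speed, color, timing) from video frames with the 3D-CNN and RNN.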

Published in International Journal of Information and Communication Sciences (Volume 3, Issue 2)
DOI 10.11648/j.ijics.20180302.13
Page(s) 33-41
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2018. Published by Science Publishing Group

Keywords

3D-Reconstruction, Neural Networks, Firework

References
[1] Reeves W T, Blau R. Approximate and probabilistic algorithms for shading and rendering structured particle systems [J]. ACM SIGGRAPH Computer Graphics, 1985, 19(3):313-322.
[2] Reeves W T. Particle systems—a technique for modeling a class of fuzzy objects [M]// Seminal graphics. ACM, 1998:91-108.
[3] Loke T S, Tan D, Seah H S, et al. Rendering Fireworks Displays [J]. IEEE Computer Graphics & Applications, 1992, 12(3):33-43.
[4] Zhang S. Fireworks Simulation Based on Particle System [C]// International Conference on Information and Computing Science. IEEE, 2009:187-190.
[5] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition [J]. Proceedings of the IEEE, 1998, 86(11):2278-2324.
[6] Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database [C]// Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009:248-255.
[7] He K, Zhang X, Ren S, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification [C]// IEEE International Conference on Computer Vision. IEEE, 2015:1026-1034.
[8] Boureau Y, Bach F, LeCun Y, et al. Learning mid-level features for recognition [C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2010:2559-2566.
[9] Lipton Z C, Berkowitz J, Elkan C. A Critical Review of Recurrent Neural Networks for Sequence Learning [J]. Computer Science, 2015.
[10] Graves A. Long Short-Term Memory [M]// Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012:1735-1780.
[11] Graves A, Mohamed A R, Hinton G. Speech recognition with deep recurrent neural networks [C]// IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013:6645-6649.
[12] Gers F A, Schmidhuber J, Cummins F. Learning to Forget: Continual Prediction with LSTM [J]. Neural Computation, 2000, 12(10):2451-2471.
[13] Liu Z, Luo P, Wang X, et al. Deep Learning Face Attributes in the Wild [C]// IEEE International Conference on Computer Vision. IEEE Computer Society, 2015:3730-3738.
[14] He K, Wang Z, Fu Y, et al. Adaptively Weighted Multi-task Deep Network for Person Attribute Classification [C]// ACM on Multimedia Conference. ACM, 2017:1636-1644.
[15] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception Architecture for Computer Vision [C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016:2818-2826.
[16] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks [C]// International Conference on Neural Information Processing Systems. Curran Associates Inc. 2012:1097-1105.
[17] Tran D, Bourdev L, Fergus R, et al. Learning Spatiotemporal Features with 3D Convolutional Networks [C]// IEEE International Conference on Computer Vision. IEEE, 2015:4489-4497.
[18] Kingma D P, Ba J. Adam: A Method for Stochastic Optimization [J]. arXiv preprint arXiv:1412.6980, 2014.
Cite This Article
  • APA Style

    Wang, Z., & Hu, L. (2018). 3D Firework Reconstruction from a Given Videos. International Journal of Information and Communication Sciences, 3(2), 33-41. https://doi.org/10.11648/j.ijics.20180302.13


    ACS Style

    Wang, Z.; Hu, L. 3D Firework Reconstruction from a Given Videos. Int. J. Inf. Commun. Sci. 2018, 3(2), 33-41. doi: 10.11648/j.ijics.20180302.13


    AMA Style

    Wang Z, Hu L. 3D Firework Reconstruction from a Given Videos. Int J Inf Commun Sci. 2018;3(2):33-41. doi: 10.11648/j.ijics.20180302.13




Author Information
  • Zhihong Wang, State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China

  • Linyi Hu, State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
