@HaomingJiang · 2018-07-12

Review 3537

Summary

This paper proposes a new algorithm for solving generalized parametric additive models with a convergence guarantee. The algorithm handles large-scale problems by adopting doubly stochastic optimization and a non-orthogonal basis. A convergence analysis is provided. I feel that the contribution is not substantial enough.

Advantages

  1. They investigate the use of a non-orthogonal basis and achieve strong empirical results.
  2. They use doubly stochastic gradients to handle the large numbers of parameters and samples.

Weaknesses

  1. The relaxation from (2) to (3) does not seem well justified to me.
  2. Wrong template: "Submitted to 31st Conference on Neural Information Processing Systems (NIPS 2017)".
  3. The batch size used for DSG is not provided.
  4. The way irrelevant features are generated is not very convincing to me. For example, when there are correlations between features, can the algorithm still perform well? Should we not also consider real data that is naturally high dimensional?
  5. The convergence analysis relies on the assumptions of strong convexity and Lipschitz smoothness. Under those assumptions the result follows directly from Zhao et al. [24] and cannot be considered an important contribution of this paper (the standard form of such a guarantee is sketched after this list).
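For concreteness, the kind of guarantee point 5 refers to is the standard rate for stochastic gradient methods under strong convexity and Lipschitz smoothness. The sketch below is a generic statement with illustrative notation; it is not a result taken from the paper or from Zhao et al. [24].

```latex
% Generic stochastic-gradient rate under the assumptions named in point 5:
% F is \mu-strongly convex and L-smooth, the stochastic gradients are
% unbiased with variance at most \sigma^2, and step sizes \eta_t = O(1/(\mu t)).
% All symbols here are illustrative, not the paper's notation.
\mathbb{E}\bigl[\lVert \theta_t - \theta^\ast \rVert^2\bigr]
  = O\!\left(\frac{\sigma^2}{\mu^2 t}\right),
\qquad
\mathbb{E}\bigl[F(\theta_t) - F(\theta^\ast)\bigr]
  = O\!\left(\frac{L\,\sigma^2}{\mu^2 t}\right).
```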

Review 264

Summary

This paper proposes a new screening method for sparse conditional random fields (CRFs). The authors introduce a dynamic screening method that accelerates the computation based on a dual optimum estimation technique.

Advantages

  1. The authors carefully explore the structure of the dual problem and propose a dual optimum estimation.
  2. Introducing a dynamic screening method to sparse CRFs is novel.

Weaknesses

  1. In Lemma 3 (iii), what is the meaning of theta_2?
  2. Comparisons with other sparse CRF algorithms and with other screening methods (e.g., the static screening method) are missing.
  3. The numbers in Table 1 are not clearly explained.
  4. The meaning of the rejection ratio is not clear. Should we not compare the irrelevant features identified by the algorithm with screening against those identified without screening? (See the sketch after this list.)
  5. What is meant by "the general training algorithms"? A reference should be given. Does the framework also work for other algorithms?
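One way to read the comparison suggested in point 4 is sketched below; the notation is hypothetical and only illustrates what the reviewer is asking for, not a definition from the paper.

```latex
% Hypothetical notation: let Z be the set of features whose coefficients are
% zero at the optimum of the full (unscreened) problem, and let R be the set
% of features discarded by the screening rule. A rejection ratio of the form
\text{rejection ratio} = \frac{\lvert R \cap Z \rvert}{\lvert Z \rvert}
% would measure how many of the truly irrelevant features the screening rule
% removes, i.e. the with-screening vs. without-screening comparison above.
```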